Prosecution Insights
Last updated: April 19, 2026
Application No. 18/908,368

SYSTEM AND METHOD FOR DETERMINING WHETHER A CAMERA COMPONENT IS DAMAGED

Non-Final OA: §102, §103, §112, §DP
Filed: Oct 07, 2024
Examiner: MONK, MARK T
Art Unit: 2637
Tech Center: 2600 — Communications
Assignee: BLANCCO TECHNOLOGY GROUP IP OY
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 76% (above average; 446 granted / 588 resolved; +13.9% vs TC avg)
Interview Lift: +20.2% (strong; allowance rate with vs. without interview, among resolved cases with interview)
Typical Timeline: 2y 3m avg prosecution; 15 currently pending
Career History: 603 total applications across all art units

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 588 resolved cases.

Office Action

Rejections: §102, §103, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Regarding claims 20 and 35, it is unclear whether the claimed "a light source" is the same as the light source in claims 1 and 33.

Claim Objections

Claim 27 is objected to because of the following informalities: claim 27 depends on claim 25 but, because it recites "the multiple examples", it should depend from claim 26. Appropriate correction is required.

Claims 1-4, 14, 15, 20-22, 33, and 48 are objected to because of the following informalities: the word "analysing" is misspelled. Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used.
A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-48 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-36 of U.S. Patent No. 12,112,505 (Valtonen et al.). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims in this application are broader than the claims in patent 12,112,505.

Regarding claim 1, Valtonen et al. (U.S. Patent No. 12,112,505) discloses applicant's computer-implemented method for determining whether a camera component of a camera is damaged comprising: obtaining information relating to one or more damage indicators; obtaining, from the camera, at least one image which has been taken when light from a light source has been incident on the camera component; dividing the image into one or more areas; analysing each area to determine whether it comprises at least one of the one or more damage indicators; and based on said analysing, providing an indication of whether the camera component is classified as damaged or undamaged (claim 1: A computer-implemented method for determining whether a camera component of a camera is damaged comprising: obtaining information relating to one or more damage indicators; obtaining, from the camera, at least one image which has been taken when light from a light source has been incident on the camera component; dividing the image into one or more areas; analyzing each area to determine whether it comprises at least one of the one or more damage indicators; based on said analyzing, providing an indication of whether the camera component is classified as damaged or undamaged).
Regarding claim 2, Valtonen et al. discloses applicant's limitations wherein: the information relating to one or more damage indicators comprises a known shape of the light source such that the one or more damage indicators correspond to a lack of a corresponding shape in the image; the image comprises an imaged shape resulting from the light source; and the step of analysing each area comprises determining whether, based on the known shape of the light source, the imaged shape is as expected when the camera component is undamaged and/or when the camera component is damaged (claim 1: and wherein: the information relating to one or more damage indicators comprises a known shape of the light source such that the one or more damage indicators correspond to a lack of a corresponding shape in the image; the image comprises an imaged shape resulting from the light source; and the step of analyzing each area further comprises determining whether, based on the known shape of the light source, the imaged shape is as expected for the case when the camera component is undamaged and/or for the case when the camera component is damaged).

Regarding claim 3, Valtonen et al. discloses applicant's limitation wherein the step of analysing each area comprises digitally comparing the imaged shape with the known shape (claim 2: wherein the step of analyzing each area comprises one or more of: digitally comparing the imaged shape with the known shape).

Regarding claim 4, Valtonen et al. discloses applicant's limitation wherein the step of analysing each area comprises using spatial mathematics to compare the imaged shape with the known shape (claim 2: and using spatial mathematics to compare the imaged shape with the known shape).
Regarding claim 5, Valtonen et al. discloses applicant's limitation comprising generating an outline of the known shape on top of the imaged shape and calculating a percentage of bright pixels, from the imaged shape, that fit within the outline (claim 3: generating an outline of the known shape on top of the imaged shape and calculating a percentage of bright pixels, from the imaged shape, that fit within the outline).

Regarding claim 6, Valtonen et al. discloses applicant's limitation wherein the step of generating an outline of the known shape on top of the imaged shape comprises detecting a centre of the brightest area in the image, drawing the outline of the known shape around the centre, checking if the brightest area extends beyond the outline or checking if the brightest area does not extend to the outline, and adjusting a size of the outline such that the brightest area extends to the outline in at least one direction (claim 4: wherein the step of generating an outline of the known shape on top of the imaged shape comprises: detecting a center of the brightest area in the image; drawing the outline of the known shape around the center; checking if the brightest area extends beyond the outline or checking if the brightest area does not extend to the outline; and adjusting the size of the outline such that the brightest area extends to the outline in at least one direction).
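For orientation only: the outline-fitting steps recited in claims 5-6 (detect the centre of the brightest area, draw the known shape around it, grow the outline until the bright area reaches it) can be sketched as below. This is a hypothetical illustration for a circular known shape, not the applicant's or patentee's implementation; the grayscale NumPy array input and the 90% brightness cutoff are assumptions.

```python
import numpy as np

def fit_circle_outline(image, tol=0.9):
    """Sketch of claim 6 for a circular known shape: locate the centre of the
    brightest area, then size a circular outline so the bright area extends to
    it in at least one direction. `tol` (assumed) defines "brightest area"."""
    bright = image >= tol * image.max()          # brightest-area mask
    ys, xs = np.nonzero(bright)
    cy, cx = ys.mean(), xs.mean()                # centre of the brightest area
    # Radius such that the bright area just reaches the outline in one direction.
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max()
    return (cy, cx), r
```

A real implementation would iterate the "check and adjust" loop the claim describes; taking the maximum distance from the centre collapses that loop into one step.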
Regarding claim 7, Valtonen et al. discloses applicant's limitation wherein the step of calculating a percentage of bright pixels, from the imaged shape, that fit within the outline comprises determining a maximum luminosity of the imaged shape, determining a number of bright pixels within the outline having a luminosity within a predetermined threshold of the maximum luminosity, and dividing said number of bright pixels by a total number of pixels within the outline (claim 5: wherein the step of calculating a percentage of bright pixels, from the imaged shape, that fit within the outline comprises determining a maximum luminosity of the imaged shape, determining the number of bright pixels within the outline having a luminosity within a predetermined threshold of the maximum luminosity, and dividing said number of bright pixels by a total number of pixels within the outline).

Regarding claim 8, Valtonen et al. discloses applicant's limitation wherein the predetermined threshold is 90% of the maximum luminosity (claim 6: wherein the predetermined threshold is 90% of the maximum luminosity).

Regarding claim 9, Valtonen et al. discloses applicant's limitation wherein the camera component is determined to be damaged if the percentage of bright pixels from the imaged shape that fit within the outline is less than 90% (claim 7: wherein the camera component is determined to be damaged if the percentage of bright pixels from the imaged shape that fit within the outline is less than 90%).

Regarding claim 10, Valtonen et al. discloses applicant's limitation wherein the known shape is a circle or an essentially round or elliptical area (claim 8: wherein the known shape is a circle or an essentially round or elliptical area).
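The bright-pixel calculation of claims 7-9 is concrete enough to sketch. The following is a minimal illustration, assuming a grayscale NumPy array and a boolean mask for the pixels inside the outline (both hypothetical inputs not specified in the claims):

```python
import numpy as np

def bright_pixel_percentage(image, outline_mask, threshold_frac=0.9):
    """Claims 7-8 sketch: fraction of pixels inside the outline whose
    luminosity is within 90% of the image's maximum luminosity."""
    region = image[outline_mask]
    max_lum = image.max()                        # maximum luminosity of the imaged shape
    bright = np.count_nonzero(region >= threshold_frac * max_lum)
    return bright / region.size                  # divide by total pixels within the outline

def is_damaged(image, outline_mask, min_fit=0.9):
    """Claim 9 sketch: damaged if fewer than 90% of pixels in the outline are bright."""
    return bright_pixel_percentage(image, outline_mask) < min_fit
```

An undamaged lens should produce an imaged shape that fills the outline with near-maximum luminosity, so the ratio stays at or above 0.9; scattering from damage spreads light outside the expected shape and drives the ratio down.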
Regarding claim 11, Valtonen et al. discloses applicant's limitation wherein the light source is present in a field of view of the camera or the light source is in a vicinity of the field of view, when the image is taken (claim 9: wherein the light source is present in a field of view of the camera or the light source is in a vicinity of the field of view, when the image is taken).

Regarding claim 12, Valtonen et al. discloses applicant's limitation further comprising taking the image (claim 10: further comprising taking the image).

Regarding claim 13, Valtonen et al. discloses applicant's limitation wherein the one or more damage indicators comprise one or more artifact, pattern, contrast change, saturated region, blurred area, chromatic effect, light streak or other symptom (claim 11: wherein the one or more damage indicators comprise one or more artifact, pattern, contrast change, saturated region, blurred area, chromatic effect, light streak or other symptom).

Regarding claim 14, Valtonen et al. discloses applicant's limitation wherein the step of analysing each area comprises using a statistical analysis to determine whether at least one of the one or more damage indicators is present (claim 12: wherein the step of analyzing each area further comprises one or more of: using a statistical analysis to determine whether at least one of the one or more damage indicators is present).

Regarding claim 15, Valtonen et al. discloses applicant's limitation wherein the step of analysing each area comprises calculating an optical parameter for each area and determining whether each optical parameter is indicative of at least one of the one or more damage indicators (claim 12: wherein the step of analyzing each area further comprises one or more of: calculating an optical parameter for each area and determining whether each optical parameter is indicative of at least one of the one or more damage indicators).
Regarding claim 16, Valtonen et al. discloses applicant's limitation wherein the optical parameter comprises one or more of: a colour; a wavelength; a luminosity; an intensity or a contrast (claim 13: wherein the optical parameter comprises one or more of: a colour; a wavelength; a luminosity; an intensity or a contrast).

Regarding claim 17, Valtonen et al. discloses applicant's limitation comprising calculating an average optical parameter for each area and determining whether each average optical parameter is indicative of at least one of the one or more damage indicators (claim 14: further comprising calculating an average optical parameter for each area and determining whether each average optical parameter is indicative of at least one of the one or more damage indicators).

Regarding claim 18, Valtonen et al. discloses applicant's limitation comprising determining a percentage of a total number of pixels within each area, for which the optical parameter is within a predetermined range (claim 15: further comprising determining a percentage of a total number of pixels within each area, for which the optical parameter is within a predetermined range).

Regarding claim 19, Valtonen et al. discloses applicant's limitation wherein the predetermined range is 90% or more of an expected optical parameter (claim 16: wherein the predetermined range is 90% or more of an expected optical parameter).
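Claims 18-19 amount to a per-area in-range pixel count. A minimal sketch, assuming luminosity as the optical parameter and an `expected` reference value supplied by the caller (a hypothetical input; the claims do not say where it comes from):

```python
import numpy as np

def area_within_range_pct(area, expected, lo_frac=0.9):
    """Claims 18-19 sketch: percentage of the area's pixels whose optical
    parameter (here, luminosity) is 90% or more of the expected value."""
    in_range = np.count_nonzero(area >= lo_frac * expected)
    return 100.0 * in_range / area.size
```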
Regarding claim 20, Valtonen et al. discloses applicant's limitation further comprising negating a light source in the image by: determining a brightest region corresponding to an area of greatest intensity in the image and all adjacent areas having an intensity in a pre-determined range of the greatest intensity; and excluding the brightest region from the step of analysing each area (claim 17: further comprising negating the light source in the image by: determining a brightest region corresponding to an area of greatest intensity in the image and all adjacent areas having an intensity in a pre-determined range of the greatest intensity; and excluding the brightest region from the step of analyzing each area).

Regarding claim 21, Valtonen et al. discloses applicant's limitation wherein the step of analysing each area comprises using a trained machine learning algorithm to classify each area as comprising none of the one or more damage indicators or at least one of the one or more damage indicators (claim 18: wherein the step of analyzing each area further comprises using a trained machine learning algorithm to classify each area as comprising none of the one or more damage indicators or at least one of the one or more damage indicators).

Regarding claim 22, Valtonen et al. discloses applicant's limitation wherein the step of analysing each area comprises using a trained machine learning algorithm to classify each area as resulting from a damaged or undamaged camera component (claim 18: wherein the step of analyzing each area further comprises using a trained machine learning algorithm to classify each area as resulting from a damaged or undamaged camera component).

Regarding claim 23, Valtonen et al. discloses applicant's limitation wherein the machine learning algorithm comprises a neural network (claim 19: wherein the machine learning algorithm comprises one or more of: a neural network).
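The light-source negation of claim 20 can be sketched as masking out the peak-intensity region before per-area analysis. This is a simplified illustration: it thresholds on intensity relative to the maximum rather than tracing "all adjacent areas" via connected components, and the 90% range fraction is an assumed parameter.

```python
import numpy as np

def negate_light_source(image, range_frac=0.9):
    """Claim 20 sketch: exclude the brightest region (pixels within a
    pre-determined range of the greatest intensity) from later analysis.
    Returns the masked image and a boolean mask of pixels kept."""
    peak = image.max()                       # area of greatest intensity
    keep = image < range_frac * peak         # pixels outside the brightest region
    return np.where(keep, image, 0), keep
```

Downstream area analysis would then only consider pixels where `keep` is true, so the light source itself is not mistaken for a damage indicator.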
Regarding claim 24, Valtonen et al. discloses applicant's limitation wherein the machine learning algorithm comprises a deep learning algorithm (claim 19: wherein the machine learning algorithm comprises one or more of: a deep learning algorithm).

Regarding claim 25, Valtonen et al. discloses applicant's limitation comprising: extracting information from each area; comparing the extracted information against one or more predetermined probability vectors to establish whether the area should be classified as comprising none of the one or more damage indicators or at least one of the one or more damage indicators; and calculating a probability that the area is correctly classified (claim 20: extracting information from each area; comparing the extracted information against one or more predetermined probability vectors to establish whether the area should be classified as comprising none of the one or more damage indicators or at least one of the one or more damage indicators; and calculating a probability that the area is correctly classified).

Regarding claim 26, Valtonen et al. discloses applicant's limitation further comprising training the machine learning algorithm by providing multiple examples of images from damaged and undamaged camera components (claim 21: further comprising training the machine learning algorithm by providing multiple examples of images from damaged and undamaged camera components).
Regarding claim 27, Valtonen et al. discloses applicant's limitation wherein, during training, the machine learning algorithm performs the following processes: extracting information from multiple examples; transforming the extracted information into information matrices; manipulating the information matrices into combined matrices; and using the combined matrices to establish a probability vector for each classification (claim 22: wherein, during training, the machine learning algorithm performs the following processes: extracting information from the multiple examples; transforming the extracted information into information matrices; manipulating the information matrices into combined matrices; and using the combined matrices to establish a probability vector for each classification).

Regarding claim 28, Valtonen et al. discloses applicant's limitation wherein the image comprises a neutral background (claim 23: wherein the image comprises a neutral background).

Regarding claim 29, Valtonen et al. discloses applicant's limitation comprising calculating a percentage of the areas determined as comprising at least one of the one or more damage indicators, compared to all areas of a single image, and classifying the camera component as damaged if the percentage is at least 1%, 2%, 5% or 10% (claim 24: comprising calculating a percentage of the areas determined as comprising at least one of the one or more damage indicators, compared to all areas of a single image, and classifying the camera component as damaged if the percentage is at least 1%, 2%, 5% or 10%).

Regarding claim 30, Valtonen et al. discloses applicant's limitation wherein images from damaged camera components are further classified as resulting from defective or destroyed components (claim 25: wherein images from damaged camera components are further classified as resulting from defective or destroyed components).
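The divide-and-classify flow of claims 1 and 29 (split the image into areas, flag areas showing an indicator, classify the component by the flagged percentage) can be sketched as follows. The grid size, the 1% default, and the `area_has_indicator` callback are all illustrative assumptions; the claims leave the per-area test open.

```python
import numpy as np

def classify_component(image, area_has_indicator, grid=(4, 4),
                       min_damaged_frac=0.01):
    """Claims 1/29 sketch: divide the image into a grid of areas, test each
    area with a caller-supplied indicator predicate, and classify the camera
    component as damaged when the flagged fraction meets the threshold."""
    h, w = image.shape
    gy, gx = grid
    flagged = 0
    for i in range(gy):
        for j in range(gx):
            area = image[i * h // gy:(i + 1) * h // gy,
                         j * w // gx:(j + 1) * w // gx]
            if area_has_indicator(area):
                flagged += 1
    frac = flagged / (gy * gx)
    return ("damaged" if frac >= min_damaged_frac else "undamaged"), frac
```

In practice `area_has_indicator` would be one of the analyses recited earlier (shape comparison, optical-parameter thresholding, or a trained classifier).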
Regarding claim 31, Valtonen et al. discloses applicant's limitation wherein defective components are further classified as scratched, dented, dislocated, distorted or opaque (claim 26: wherein defective components are further classified as scratched, dented, dislocated, distorted or opaque).

Regarding claim 32, Valtonen et al. discloses applicant's limitation wherein the camera component is a camera lens, window or transparent front element (claim 27: wherein the camera component is a camera lens, window or transparent front element).

Regarding claim 33, Valtonen et al. discloses applicant's system for determining whether a camera component of a camera is damaged, the system comprising: a non-transitory computer-readable medium comprising programming instructions operable by a processor to carry out the following steps: obtaining information relating to one or more damage indicators; obtaining, from the camera, at least one image which has been taken when light from a light source has been incident on the camera component; dividing the image into one or more areas; analysing each area to determine whether it comprises at least one of the one or more damage indicators; and based on said analysing, providing an indication of whether the camera component is classified as damaged or undamaged (claim 28: A system for determining whether a camera component of a camera is damaged, the system comprising: a non-transitory computer-readable medium comprising programming instructions operable by a processor to carry out the following steps: obtaining information relating to one or more damage indicators; obtaining, from the camera, at least one image which has been taken when light from a light source has been incident on the camera component; dividing the image into one or more areas; analyzing each area to determine whether it comprises at least one of the one or more damage indicators; based on said analyzing, providing an indication of whether the camera component is classified as damaged or undamaged).

Regarding claim 34, Valtonen et al. discloses applicant's limitation operable by a processor associated with the camera or a diagnostic processor when in communication with the camera (claim 28: a diagnostic processor when in communication with the camera).

Regarding claim 35, Valtonen et al. discloses applicant's limitation further comprising a light source arranged to provide light incident on the camera component (claim 28: the light source arranged to provide light incident on the camera component).

Regarding claim 36, Valtonen et al. discloses applicant's limitation comprising a fibre optic cable arranged to direct light from the light source to the camera component (claim 28: a fiber optic cable arranged to direct light from the light source to the camera component).

Regarding claim 37, Valtonen et al. discloses applicant's limitation wherein the light source has a known shape (claim 28: wherein: the light source has a known shape).

Regarding claim 38, Valtonen et al. discloses applicant's limitation wherein the light source is arranged outside of a field of view of the camera (claim 28: the light source is arranged outside of a field of view of the camera).

Regarding claim 39, Valtonen et al. discloses applicant's limitation wherein the light source and/or camera is movable such that different images can be taken at different angles of illumination (claim 28: the light source and/or camera is movable such that different images can be taken at different angles of illumination).

Regarding claim 40, Valtonen et al. discloses applicant's limitation wherein the light source is a white light source (claim 28: the light source is a white light source).
Regarding claim 41, Valtonen et al. discloses applicant's limitation comprising a controller configured to activate said light source when an image is taken (claim 29: a controller configured to activate said light source when an image is taken).

Regarding claim 42, Valtonen et al. discloses applicant's limitation further comprising a neutral background such that the light source is at least 10 times more luminous than the background (claim 30: further comprising a neutral background such that the light source is at least 10 times more luminous than the background).

Regarding claim 43, Valtonen et al. discloses applicant's limitation further comprising a focal feature for the camera to focus on when taking the image (claim 31: further comprising a focal feature for the camera to focus on when taking the image).

Regarding claim 44, Valtonen et al. discloses applicant's limitation further comprising a holder and/or robotic arm configured to position the camera for taking the image (claim 32: further comprising a holder and/or robotic arm configured to position the camera for taking the image).

Regarding claim 45, Valtonen et al. discloses applicant's limitation wherein the camera is provided on a mobile device (claim 33: wherein the camera is provided on a mobile device).

Regarding claim 46, Valtonen et al. discloses applicant's limitation further comprising the diagnostic processor and a communication means for communication with the camera (claim 34: wherein the diagnostic processor further comprises a communication means for communication with the camera).
Regarding claim 47, Valtonen et al. discloses applicant's limitation wherein multiple light sources are arranged to provide light to the camera component and a controller is configured to turn each individual light source on and off such that one or more of the multiple light sources is active when an image is taken (claim 35: wherein multiple light sources are arranged to provide light to the camera component and a controller is configured to turn each individual light source on and off such that one or more of the multiple light sources is active when an image is taken).

Regarding claim 48, Valtonen et al. discloses applicant's computer-implemented method for determining whether a camera component of a camera is damaged comprising: obtaining information relating to one or more damage indicators; obtaining, from the camera, at least one image which has been taken when light from a light source is incident on the camera component; dividing the image into one or more areas; analysing each area to determine whether it comprises at least one of the one or more damage indicators; and based on said analysing, providing an indication of whether the camera component is classified as damaged or undamaged; wherein the information relating to one or more damage indicators comprises a known shape of the light source such that the one or more damage indicators correspond to a lack of a corresponding shape in the image (claim 36: A computer-implemented method for determining whether a camera component of a camera is damaged comprising: performing the following steps: obtaining information relating to one or more damage indicators; obtaining, from the camera, at least one image which has been taken when light from a light source has been incident on the camera component; dividing the image into one or more areas; analyzing each area to determine whether it comprises at least one of the one or more damage indicators; and based on said analyzing, providing an indication of whether the camera component is classified as damaged or undamaged; and from claim 1: wherein the information relating to one or more damage indicators comprises a known shape of the light source such that the one or more damage indicators correspond to a lack of a corresponding shape in the image).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 11-16, 28, 30-32, 37-39, and 41 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Robins et al. (US Publication No. 2003/0193604).

Regarding claim 1, Robins et al. discloses, in Fig. 1-7C, applicant's computer-implemented method (paragraph 0034: the operation of the digital camera 20 is controlled by a microprocessor 60) for determining whether a camera component of a camera is damaged (paragraph 0039: the operation of a camera lens contamination detection [damage] system is performed by (paragraph 0034) microprocessor 60, (paragraph 0046) which will provide an indication to the user that there is likely contamination contained on the first lens element 42).
Robins et al. further discloses applicant's obtaining information relating to one or more damage indicators; obtaining, from the camera, at least one image which has been taken when light from a light source has been incident on the camera component; dividing the image into one (or more) areas; analysing each area to determine whether it comprises at least one of the one or more damage indicators; and based on said analysing, providing an indication of whether the camera component is classified as damaged or undamaged (paragraph 0034: the operation of the digital camera 20 is controlled by a microprocessor 60, where (paragraph 0039) the operation of a camera lens contamination detection system is performed. Paragraphs 0041-0046: microprocessor 60 turns on LEDs 38/38a to cause them to illuminate the first lens element 42 such that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image. Comparing the captured image to a baseline image is then performed; if the comparison indicates a difference of a predetermined sensed magnitude when the lens cap is on, this means that there was a significant amount of scattered light from the LEDs 38, in FIG. 2, and that there is contamination on the first lens element 42, and the microprocessor 60 will provide an indication to the user that there is likely contamination contained on the first lens element 42. Thus microprocessor 60 obtains sensed magnitude information relating to one or more contamination damage indicators in an image; obtains, from the camera 20, at least one captured image which has been taken when light from a light source LEDs 38/38a has been incident on the camera lens component; divides the image into one (or more) area to determine contamination; analyses each area, i.e. the captured image area, to determine whether it comprises at least one of the one or more contamination damage indicators by comparing the captured image to a baseline image; and based on said analysing, provides an indication of whether the camera component is classified as contamination damaged, per the comparison that indicates a difference of a predetermined sensed magnitude when the lens cap is on, or undamaged by no indication of a predetermined sensed magnitude).

Regarding claim 11, Robins et al. further discloses applicant's limitation wherein the light source is in a vicinity of the field of view, when the image is taken (paragraph 0019: LEDs 38 are arranged around the inner periphery of the lens barrel 24, where (paragraph 0041) the process moves to an activate shutter step 108 in which the digital camera 20 (as controlled by the microprocessor 60) will cause the shutter drive 70 to open the shutter mechanism 50 to allow any light from LEDs 38/38a which is scattered through the lens system by contamination on the first lens element 42 to reach the electronic image sensor 52, such that the light source 38 is in a vicinity of the field of view of camera lens barrel 24 when the image is taken with shutter drive 70), or the light source is present in a field of view of the camera.
Regarding claim 12, Robins et al further discloses applicant’s further comprising taking the image (paragraph 0041: microprocessor 60 takes the captured image). Regarding claim 13, Robins et al further discloses applicant’s wherein the one or more damage indicators comprise one or more artifacts (paragraph 0041: the one or more damage indicators comprise one or more contamination artifacts (pattern, contrast change, saturated region, blurred area, chromatic effect, light streak or other symptom)). Regarding claim 14, Robins et al further discloses applicant’s wherein the step of analysing each area comprises using a statistical analysis to determine whether at least one of the one or more damage indicators is present (paragraph 0049, FIGS. 5A – 7C: the histograms shown in FIGS. 5A-5C and 6A-6C are readily apparent and indicative of a significant amount of contamination on the first lens element 42 (shown in FIG. 2). It will be appreciated by those skilled in the art that if the lens cap 36 (shown in FIG. 2) were removed from the digital camera 20, the shift in the pixel signature would be much more dramatic, as illustrated in FIG. 7. Thus, by comparing the shift, the tests performed in the lens cap off determination step 114 and the image within range determination 118 of FIG. 4 may be accomplished with relative ease. In the sensing system embodiment having the pixel values shown in FIGS. 5-7, only red light LEDs 38 (or 38a) have been employed. Use of LEDs in a single spectral range, such as red, further facilitates comparison of signals because the increase in intensity associated with contamination appears primarily in one channel; in this case it appears in the red signal channel 150. Thus, for base image comparison in which a red LED 38/38a is used, only a base image red color component 150 graph 151, such as shown in FIG. 5A, is compared to the current red color component 150 image graph 160, such as shown in FIG. 6A.
A significant difference, as indicated at 161 in FIG. 6A, indicates the existence of contamination on the camera lens. In other embodiments (particularly those employing white contamination detection light) two or more signal channels may be compared to determine if contamination is present, e.g. 151 is compared to 160, 153 is compared to 162 and 155 is compared to 164. (In a lens-cap-off condition, graphs such as 151, 153 and 155 would be compared to color component graphs such as 166, 168, 170 illustrated in FIGS. 7A-7C.) Thus, FIGS. 5A – 7C show a statistical analysis to determine whether at least one of the one or more contamination damage indicators is present). Regarding claim 15, Robins et al further discloses applicant’s wherein the step of analysing each area further comprises one or more of: calculating an optical parameter for each area and determining whether each optical parameter is indicative of at least one of the one or more damage indicators (paragraphs 0041 – 0046: microprocessor 60 turns on LEDs 38/38a to illuminate the first lens element 42 so that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image. The captured image is then compared with a baseline image; when the comparison indicates a difference of a predetermined sensed magnitude while the lens cap is on, this means that there was a significant amount of scattered light from the LEDs 38, in FIG. 2, and that there is contamination on the first lens element 42, and the microprocessor 60 will provide an indication to the user that there is likely contamination on the first lens element 42, such that the step of analysing each area by microprocessor 60 further comprises calculating an optical parameter value for each area, by comparing the captured image with a baseline image to obtain a value, and determining whether each optical parameter value is indicative of at least one of the one or more contamination damage indicators if the comparison value indicates a difference of a predetermined sensed magnitude). Regarding claim 16, Robins et al further discloses applicant’s wherein the optical parameter comprises one or more of: an intensity (paragraphs 0041 – 0046: microprocessor 60 compares the captured image with a baseline image; the comparison indicating a difference of a predetermined sensed magnitude while the lens cap is on means that there was a significant amount of scattered light from the LEDs 38, in FIG. 2, and that there is contamination on the first lens element 42, where an intensity of the predetermined sensed magnitude is determined); a colour; a wavelength; a luminosity; or a contrast. Regarding claim 28, Robins et al further discloses applicant’s wherein the image comprises a neutral background (paragraphs 0041 – 0046: microprocessor 60 turns on LEDs 38/38a to illuminate the first lens element 42 so that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image; the captured image is then compared with a baseline image, the comparison indicating a difference of a predetermined sensed magnitude while the lens cap is on, such that the lens cap on the camera 20 provides a neutral background in the captured image).
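The single-red-channel histogram comparison relied on for claim 14 can be illustrated with a short sketch. This is editorial only, assuming coarse 8-bit pixel values; `histogram` and `channel_shift` are hypothetical names, not drawn from the reference:

```python
from collections import Counter

def histogram(channel, bins=8, full_scale=256):
    """Bin one color channel's pixel values into a coarse histogram."""
    width = full_scale // bins
    counts = Counter(min(v // width, bins - 1) for v in channel)
    return [counts.get(i, 0) for i in range(bins)]

def channel_shift(base_channel, current_channel, bins=8):
    """Total absolute shift between baseline and current histograms.

    A large shift confined to the red channel (when only red LEDs are
    used) stands in for the 'significant difference' at 161 in FIG. 6A.
    """
    b, c = histogram(base_channel, bins), histogram(current_channel, bins)
    return sum(abs(x - y) for x, y in zip(b, c))
```

For white detection light, the same comparison would simply be repeated across two or more channels, mirroring the multi-channel embodiment described above.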
Regarding claim 30, Robins et al further discloses applicant’s wherein images from damaged camera components are further classified as resulting from defective or destroyed components (paragraph 0054: the camera lens contamination detection system includes the ability to compensate for the camera getting older, as well as for any scratches or other permanent damage which occur to the lens, such that captured images from damaged camera components are further classified as resulting from defective (scratched lens) components) or destroyed components. Regarding claim 31, Robins et al further discloses applicant’s wherein defective components are further classified as scratched (paragraph 0054: the camera lens contamination detection system includes the ability to compensate for the camera getting older, as well as for any scratches or other permanent damage which occur to the lens), dented, dislocated, distorted or opaque. Regarding claim 32, Robins et al further discloses applicant’s wherein the camera component is a camera lens (paragraph 0054: the camera lens contamination detection system includes the ability to compensate for the camera getting older, as well as for any scratches or other permanent damage which occur to the lens), window or transparent front element. Regarding claim 37, Robins et al further discloses applicant’s wherein the light source has a known shape (paragraph 0029, FIG. 2: one or more round LEDs 38a are located at 154 at the periphery of the camera barrel, proximate the front surface of the first lens element 42, such that the light source has a known round shape). Regarding claim 38, Robins et al further discloses applicant’s wherein the light source is arranged outside of a field of view of the camera (paragraph 0029: one or more LEDs 38a are located at 154 at the periphery of the camera barrel, proximate the front surface of the first lens element 42, such that the light source 38a is arranged outside of a field of view of the camera 20).
Regarding claim 39, Robins et al further discloses applicant’s wherein the light source is movable such that different images can be taken at different angles of illumination (paragraph 0026: only light emitted from one of the LEDs 38 is shown for illustrative purposes. Light emitted by the LEDs 38, e.g. light rays 51, 53, 55, 57, is directed toward the first lens element 42 (and is shielded from directly reaching the electronic image sensor 52 by the mask member 56). Paragraph 0029: one or more LEDs 38a are located at 154 at the periphery of the camera barrel, proximate the front surface of the first lens element 42. LEDs 38a are oriented to direct light at an oblique angle with respect to the front surface of first lens element 42, as best shown in FIG. 2B. In this embodiment, the LEDs 38a are located so close to the perimeter of the first lens element 42, and so far removed from the lens central optical axis, that direct light from the LEDs 38a is prevented from being transmitted through the first lens element 42 to sensor 52. Paragraph 0038: the LEDs 38 or 38a are driven by the microprocessor 60 and operate as described above; either LED configuration, or both, may be employed. Under the control of microprocessor 60, the light sources 38 and 38a are switched on and off, such that the light from LED 38 and LED 38a is movable, by being switched on and off, and different images (with LED 38 on versus LED 38a on) can be taken at different angles of illumination, the light sources 38 and 38a being controlled on and off and placed in different positions as seen in FIG. 2) and/or the camera is movable.
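The on/off sequencing of the two LED configurations relied on for claims 39 and 47 amounts to a small control loop: light one source at a time and capture a frame per illumination geometry. A hypothetical sketch (the `take_image` callable and all names here stand in for the shutter/sensor path, which the reference does not express as code):

```python
def capture_per_led(leds, take_image):
    """Turn each LED on in isolation and capture one frame per illumination.

    `leds` is a list of LED identifiers (e.g. "38", "38a"); `take_image`
    is a callable standing in for the shutter/sensor pipeline, receiving
    the list of currently lit LEDs. Returns a dict mapping each LED to
    the frame captured while only that LED was lit.
    """
    frames = {}
    for led in leds:
        active = {led}  # switch this LED on, all others off
        frames[led] = take_image(sorted(active))
    return frames
```

Capturing one frame per source is what yields the "different angles of illumination" the claim recites, since each LED sits at a different position relative to the lens.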
Regarding claim 41, Robins et al further discloses applicant’s comprising a controller configured to activate said light source when an image is taken (paragraph 0041: the process moves to an activate shutter step 108 in which the digital camera 20 (as controlled by the microprocessor 60) will cause the shutter drive 70 to open the shutter mechanism 50 to allow any light from LEDs 38/38a which is scattered through the lens system by contamination on the first lens element 42 to reach the electronic image sensor 52, such that a controller 60 is configured to activate said light source when an image is taken using the shutter drive 70). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 17, 33 – 35, and 43 – 47 are rejected under 35 U.S.C. 103 as being unpatentable over Robins et al, US Publication No. 2003/0193604, in view of Lee et al, US Publication No. 2015/0294455. Regarding claim 17, Robins et al discloses applicant’s comprising calculating an average optical parameter for each area and determining whether each average optical parameter is indicative of at least one of the one or more damage indicators in paragraphs 0041 – 0046 (microprocessor 60 turns on LEDs 38/38a to illuminate the first lens element 42 so that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image. The captured image is then compared with a baseline image; when the comparison indicates a difference of a predetermined sensed magnitude while the lens cap is on, this means that there was a significant amount of scattered light from the LEDs 38, in FIG. 2, and that there is contamination on the first lens element 42, and the microprocessor 60 will provide an indication to the user that there is likely contamination on the first lens element 42, such that calculating an optical parameter for each area is performed when the comparison indicates a difference of a predetermined sensed magnitude, and determining whether each optical parameter is indicative of at least one of the one or more contamination damage indicators is performed per the magnitude value indication of the difference comparison). Robins et al discloses a method of determining an indication of contamination using imager data in a captured image but does not expressly disclose calculating an average optical parameter for each area; Lee et al teaches a method of determining the average value of detected signal values of an area using the signal values of the image. Lee et al teaches, in FIGS. 1 – 12, applicant’s calculating an average optical parameter for each area (paragraph 0069: the pattern degree of damage evaluation unit 22 evaluates the reliability of the pattern using the signal values of the image detected by the image analyzing unit 21. The pattern degree of damage evaluation unit 22 calculates a standard deviation of the signal value of the image in a number of the divided areas of the pattern, and an average value of detected signal values, such that an average optical parameter for the degree of damage of each area is calculated). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the circuitry of Robins et al in a manner similar to Lee et al.
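Lee's per-area computation (an average and a standard deviation of signal values over each divided area) is straightforward to sketch. This is an editorial illustration: the square grid layout and the function name are assumptions, not details from the reference:

```python
from statistics import mean, pstdev

def area_stats(image_rows, grid=2):
    """Split a square image (list of equal-length rows of pixel values)
    into grid x grid areas and return the (mean, population std dev) of
    the pixel values in each area, mirroring Lee's per-area average and
    standard deviation of detected signal values."""
    n = len(image_rows)
    step = n // grid
    stats = []
    for gy in range(grid):
        for gx in range(grid):
            pixels = [image_rows[y][x]
                      for y in range(gy * step, (gy + 1) * step)
                      for x in range(gx * step, (gx + 1) * step)]
            stats.append((mean(pixels), pstdev(pixels)))
    return stats
```

Each (mean, std dev) pair would then serve as the per-area "average optical parameter" tested against a damage indicator threshold.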
Doing so would result in improving the Robins et al invention in a similar way as in Lee et al, namely adding the ability to determine the average value of detected signal values of an area using the signal values of the image, from the Lee et al invention, to the method of determining an indication of contamination using imager data in a captured image in the Robins et al invention. Regarding claim 33, claim 33 is rejected as being fully encompassed by the reasons found in rejected claim 1 above, and Robins et al in view of Lee et al further teaches the additional claim limitation of applicant's a non-transitory computer-readable medium comprising programming instructions operable by a processor to carry out the following steps (Lee et al in paragraph 0066: the computing system 20 includes a controller configured to perform an operation in accordance with an embedded program, such that the controller comprises a non-transitory computer-readable medium comprising embedded programming instructions operable by the controller processor to carry out the steps of the computing system 20). Regarding claim 34, of the combination of Robins et al in view of Lee et al, Robins et al further teaches applicant’s further operable by a diagnostic processor when in communication with the camera (paragraph 0039: the operation of a camera lens contamination detection damage system is performed by (paragraph 0034) microprocessor 60, (paragraph 0046) which will provide an indication to the user that there is likely contamination contained on the first lens element 42.
Paragraph 0039: in the basic operation of a camera lens contamination detection system, the procedure begins with a lens contamination detection initiation step 100 and then moves to a turn on camera step 102 in which the digital camera 20 is turned on, such that the camera lens contamination detection damage system is operable by a diagnostic processor (microprocessor 60) when in communication with the camera 20, when the camera 20 is turned on). Regarding claim 35, of the combination of Robins et al in view of Lee et al, Robins et al further teaches applicant’s further comprising one or more of: a light source arranged to provide light incident on the camera component (paragraphs 0041 – 0046: microprocessor 60 turns on LEDs 38/38a to illuminate the first lens element 42 so that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image, such that the one or more LED 38/38a light sources are arranged to provide light incident on the camera lens 42 component); or a fibre optic cable arranged to direct light from the light source to the camera component. Regarding claim 43, of the combination of Robins et al in view of Lee et al, Robins et al further teaches applicant’s further comprising a focal feature for the camera to focus on when taking the image (paragraphs 0041 – 0046: microprocessor 60 turns on LEDs 38/38a to illuminate the first lens element 42 so that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image. The captured image is then compared with a baseline image, the comparison indicating a difference of a predetermined sensed magnitude while the lens cap is on, such that the lens cap is a focal feature for the camera to focus on when taking the captured image to detect contamination).
Regarding claim 44, of the combination of Robins et al in view of Lee et al, Robins et al further teaches applicant’s further comprising a holder and/or robotic arm configured to position the camera for taking the image (paragraph 0018: camera 20 has a camera body 22 in FIG. 1, where (paragraph 0041) microprocessor 60 turns on LEDs 38/38a to illuminate the first lens element 42 so that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image, such that camera body 22 is a holder (and/or robotic arm) configured to position the camera 20 for taking the captured image). Regarding claim 45, of the combination of Robins et al in view of Lee et al, Robins et al further teaches applicant’s wherein the camera is provided on a mobile device (paragraph 0018: camera 20 is provided on a mobile device as seen in FIG. 1). Regarding claim 46, of the combination of Robins et al in view of Lee et al, Robins et al further teaches applicant’s wherein the diagnostic processor further comprises a communication means for communication with the camera (paragraph 0039: the operation of a camera lens contamination detection damage system is performed by (paragraph 0034) microprocessor 60, (paragraph 0046) which will provide an indication to the user that there is likely contamination contained on the first lens element 42. Paragraph 0039: in the basic operation of a camera lens contamination detection system, the procedure begins with a lens contamination detection initiation step 100 and then moves to a turn on camera step 102 in which the digital camera 20 is turned on, such that the diagnostic microprocessor 60 further comprises an electrical communication means for communication with the camera 20 when camera 20 is turned on).
Regarding claim 47, of the combination of Robins et al in view of Lee et al, Robins et al further teaches applicant’s wherein multiple light sources are arranged to provide light to the camera component and a controller is configured to turn each individual light source on and off such that one or more of the multiple light sources is active when an image is taken (paragraph 0019: the LEDs 38 are arranged around the inner periphery of the lens barrel 24; at least one, and preferably between three and five, LEDs 38 may be used to generate sufficient light to detect contamination on the lens 26. Paragraph 0041: microprocessor 60 will turn on the LEDs 38/38a to illuminate the first lens element 42 so that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image, such that multiple LED 38/38a light sources are arranged to provide light to the camera lens 42 component and a microprocessor 60 controller is configured to turn each individual LED 38/38a light source on and off such that one or more of the multiple LED 38/38a light sources is active when an image is taken by microprocessor 60). Claims 21 – 24 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Robins et al, US Publication No. 2003/0193604, in view of Lee et al, US Publication No. 2015/0294455, as applied to claim 1 above, and further in view of Fitzgerald et al, US Publication No. 2018/0342050. Regarding claim 21, of the combination of Robins et al in view of Lee et al, Robins et al further teaches applicant’s to classify each area as resulting from a damaged or undamaged camera component (paragraphs 0041 – 0046: microprocessor 60 turns on LEDs 38/38a to illuminate the first lens element 42 so that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image. The captured image is then compared with a baseline image; when the comparison indicates a difference of a predetermined sensed magnitude while the lens cap is on, this means that there was a significant amount of scattered light from the LEDs 38, in FIG. 2, and that there is contamination on the first lens element 42, and the microprocessor 60 will provide an indication to the user that there is likely contamination contained on the first lens element 42, such that microprocessor 60 classifies each area as resulting from a contamination-damaged or undamaged camera component); or to classify each area as comprising none of the one or more damage indicators or at least one of the one or more damage indicators. The combination of Robins et al in view of Lee et al teaches a method of determining an indication of contamination using imager data in a captured image and a method of determining the average value of detected signal values of an area using the signal values of the image, but does not expressly teach wherein the step of analysing each area further comprises using a trained machine learning algorithm; Fitzgerald et al teaches a method of using a neural network analysis to identify other kinds of defect associated with a mobile device. Fitzgerald et al teaches, in FIGS. 1 – 25, applicant’s wherein the step of analysing each area further comprises using a trained machine learning algorithm (paragraph 0113: using a neural network analysis approach, systems and methods are provided to identify other kinds of defect associated with a mobile device through image analysis, such that the neural network analysis processing approach uses a step of analysing each area that further comprises using a trained machine learning algorithm). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the circuitry of Robins et al in a manner similar to Fitzgerald et al.
Doing so would result in improving the Robins et al invention in a similar way as in Fitzgerald et al, namely adding the ability to use a neural network analysis to identify other kinds of defect associated with a mobile device, from the Fitzgerald et al invention, to the method of determining an indication of contamination using imager data in a captured image in the Robins et al invention. Regarding claim 22, the combination of Robins et al in view of Lee et al, further in view of Fitzgerald et al, further teaches applicant’s wherein the step of analysing each area comprises using a trained machine learning algorithm to classify each area as resulting from a damaged or undamaged camera component (Robins et al in paragraphs 0041 – 0046: microprocessor 60 turns on LEDs 38/38a to illuminate the first lens element 42 so that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image. The captured image is then compared with a baseline image; when the comparison indicates a difference of a predetermined sensed magnitude while the lens cap is on, this means that there was a significant amount of scattered light from the LEDs 38, in FIG. 2, and that there is contamination on the first lens element 42, and the microprocessor 60 will provide an indication to the user that there is likely contamination contained on the first lens element 42. Fitzgerald et al, paragraph 0113: using a neural network analysis approach, systems and methods are provided to identify other kinds of defect associated with a mobile device through image analysis, such that the neural network analysis processing approach performs the step of analysing each area comprising using a trained machine learning algorithm to classify each area as resulting from a damaged or undamaged camera component).
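To illustrate "a trained machine learning algorithm to classify each area" in miniature, here is a deliberately tiny stand-in, a nearest-centroid classifier over mean area intensity. This is an editorial sketch only; it is not Fitzgerald's neural network, and all names are hypothetical:

```python
from statistics import mean

def train_centroids(examples):
    """'Train' by averaging one feature (mean pixel intensity) per label.

    `examples` is a list of (pixels, label) pairs, where label is
    'damaged' or 'undamaged'. Returns a dict: label -> centroid feature.
    """
    by_label = {}
    for pixels, label in examples:
        by_label.setdefault(label, []).append(mean(pixels))
    return {label: mean(feats) for label, feats in by_label.items()}

def classify_area(pixels, centroids):
    """Assign an area to whichever class centroid its mean intensity is nearest."""
    feat = mean(pixels)
    return min(centroids, key=lambda label: abs(centroids[label] - feat))
```

The training step over labeled damaged/undamaged examples mirrors, at toy scale, the network-training-then-inference flow Fitzgerald describes in paragraphs 0056 and 0113.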
Regarding claim 23, of the combination of Robins et al in view of Lee et al further in view of Fitzgerald et al, Fitzgerald et al further teaches applicant’s wherein the machine learning algorithm comprises a neural network (paragraph 0113: using a neural network analysis approach, systems and methods are provided to identify other kinds of defect associated with a mobile device through image analysis). Regarding claim 24, of the combination of Robins et al in view of Lee et al further in view of Fitzgerald et al, Fitzgerald et al further teaches applicant’s wherein the machine learning algorithm comprises a deep learning algorithm (paragraph 0113: using a neural network analysis approach, systems and methods are provided to identify other kinds of defect associated with a mobile device through image analysis to perform a deep learning algorithm). Regarding claim 26, the combination of Robins et al in view of Lee et al further in view of Fitzgerald et al further teaches applicant’s further comprising training the machine learning algorithm by providing multiple examples of images from damaged and undamaged camera components (Robins et al in paragraph 0039: the operation of a camera lens contamination detection system is performed. Paragraphs 0041 – 0046: microprocessor 60 turns on LEDs 38/38a to illuminate the first lens element 42 so that light scattered through the lens system by contamination on the first lens element 42 reaches the electronic image sensor 52, which captures an image. The captured image is then compared with a baseline image; when the comparison indicates a difference of a predetermined sensed magnitude while the lens cap is on, this means that there was a significant amount of scattered light from the LEDs 38, in FIG. 2, and that there is contamination on the first lens element 42, and the microprocessor 60 will provide an indication to the user that there is likely contamination contained on the first lens element 42. Fitzgerald et al in paragraph 0113: using a neural network analysis approach, systems and methods are provided to identify other kinds of defect associated with a mobile device through image analysis, where (paragraph 0056) neural network training is performed from network training data, and network inference processes identify defects using the trained neural network, such that the neural network is a trained machine learning algorithm that identifies other kinds of defect associated with a mobile device, where multiple examples of captured images from contamination-damaged and undamaged camera components are provided each time, in the Robins et al invention, that the camera lens contamination detection system is performed). Claim 40 is rejected under 35 U.S.C. 103 as being unpatentable over Robins et al, US Publication No. 2003/0193604, in view of Lee et al, US Publication No. 2015/0294455, as applied to claim 35 above, and further in view of Mutti et al, US Publication No. 2016/0150213. Regarding claim 40, of the combination of Robins et al in view of Lee et al, Robins et al further teaches applicant’s wherein the light source is a light source (paragraph 0019: the LEDs 38 are arranged around the inner periphery of the lens barrel 24, but any suitable light source may be used); Robins et al in view of Lee et al teach a method of determining an indication of contamination using imager data in a captured image using an LED light and a method of determining the average value of detected signal values of an area using the signal values of the image, but do not expressly teach a white light source; Mutti et al teaches using white light source LEDs. Mutti et al teaches, in FIGS. 1 – 11, applicant’s a white light source (paragraph 0071: white light source LEDs).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the circuitry of Robins et al in a manner similar to Mutti et al. Doing so would result in improving the Robins et al invention in a similar way as in Mutti et al, namely adding the ability to use a white light LED, from the Mutti et al invention, to the method of determining an indication of contamination using imager data in a captured image using LEDs in the Robins et al invention. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK T MONK, whose telephone number is (571) 270-7454. The examiner can normally be reached Monday through Friday, 8am to 4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran, can be reached at 571-272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK T MONK/Primary Examiner, Art Unit 2637

Prosecution Timeline

Oct 07, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604117
IMAGE ELEMENT READOUT CIRCUITRY, IMAGE ELEMENT, AND IMAGE ELEMENT READOUT METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12598396
RANDOM NUMBER GENERATOR FOR MULTICHANNEL IMAGE SENSING DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12593145
IMAGING ELEMENT, CONTROL METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12580580
ANALOG-TO-DIGITAL CONVERTER FOR SIGNAL SAMPLING
2y 5m to grant Granted Mar 17, 2026
Patent 12563317
IMAGING ELEMENT AND IMAGING DEVICE
2y 5m to grant Granted Feb 24, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
96%
With Interview (+20.2%)
2y 3m
Median Time to Grant
Low
PTA Risk
Based on 588 resolved cases by this examiner. Grant probability derived from career allow rate.
