Prosecution Insights
Last updated: April 18, 2026
Application No. 18/990,344

SYSTEMS AND METHODS FOR TUNING SYMBOL READERS

Non-Final OA: §102, §103
Filed: Dec 20, 2024
Examiner: LY, TOAN C
Art Unit: 2876
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Cognex Corporation
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (above average; 384 granted / 488 resolved; +10.7% vs TC avg)
Interview Lift: +20.2% on resolved cases with interview (strong)
Avg Prosecution: 2y 1m (fast prosecutor; 9 currently pending)
Career History: 497 total applications across all art units
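The headline numbers above are mutually consistent, which is easy to verify: 384/488 rounds to 79%, 497 total applications minus 488 resolved leaves 9 pending, and 79% plus the +20.2% interview lift rounds to the quoted 99%. A quick sanity check in Python; the additive treatment of the interview lift is an assumption about the dashboard's arithmetic, not a documented formula:

```python
# Sanity-check the examiner statistics quoted above from the raw counts.
# Assumption: "with interview" = career allow rate + stated interview lift.
granted, resolved = 384, 488
total_applications = 497

allow_rate = granted / resolved                  # career allow rate
pending = total_applications - resolved          # applications still open
with_interview = allow_rate * 100 + 20.2         # stated +20.2% interview lift

print(f"{allow_rate:.0%}")       # 79%
print(pending)                   # 9
print(round(with_interview))     # 99
```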

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 52.9% (+12.9% vs TC avg)
§102: 31.1% (-8.9% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 488 resolved cases.
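Each statute-specific delta above is stated against the Tech Center average. Backing that baseline out of the numbers shows every statute implies the same 40.0% figure, consistent with the note that the Tech Center average is an estimate. A quick check; the single-flat-baseline reading is an inference from the arithmetic, not a documented methodology:

```python
# Back out the implied Tech Center baseline from each statute-specific rate
# and its stated delta. Every statute yields the same 40.0%, suggesting the
# dashboard compares against a single flat TC-average estimate.
examiner_rate = {"101": 2.4, "103": 52.9, "102": 31.1, "112": 9.0}
delta_vs_tc = {"101": -37.6, "103": 12.9, "102": -8.9, "112": -31.0}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)
```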

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 6/25/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 12 is objected to because of the following informality: Claim 12 lacks a period at the end of the claim. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 8-14, and 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kuchenbrod (US 20230007185 A1).

Regarding claim 1, Kuchenbrod discloses (Figs. 4 & 6) a method of detecting and correcting focus drift of a variable focus lens for fixed focus applications comprising: receiving, from an imaging device associated with a set of attributes (¶42 - imaging reader 106 obtains scanning parameters; ¶44 - obtaining the scanning parameters may include obtaining calibration parameters, which may include one or more defocus parameters), an image of an object that is at least partially within a field-of-view (FOV) of the imaging device, wherein the imaging device captured the image according to the set of attributes (¶45 - the variable focus imaging controller 214 may assess the obtained scanning parameters and control the image sensor, variable focus optical element, and other components of the imaging reader 106 according to the scanning parameters to obtain the image); in response to a presence of a symbol within the image, generating a region of interest (ROI) of the symbol (¶45 - the processor of the imaging reader 106, at 406, identifies a region of interest in the captured image; the region of interest may include a barcode); generating a quality metric for the image (¶46 - at 408, the processor of the imaging reader 106 analyzes the image and determines an image quality of the image; ¶46 - a blur value), wherein the quality metric indicates a measurement that the ROI of the symbol can be decoded (¶56 - the near and far focus limits are predetermined minimum focal plane and maximum focal plane distance thresholds that are limits of the focus of the imaging reader 106, outside of which the imaging and decoding of indicia of the imaging reader degrades); and adjusting the set of attributes of the imaging device based, at least in part, on the quality metric for the image (¶44 - defocus parameters; ¶68 - the processor may then provide the retrieved focus drift and/or tuning amount to the variable focus imaging controller 214, the actuator 215, the variable focus optical element 208, or other elements of the imaging reader 106 to tune the focus of the imaging reader 106 according to the previously determined focus drift).

Regarding claim 8, Kuchenbrod discloses the method of claim 1 above and further discloses receiving a second image of the object from the imaging device captured according to the adjusted set of attributes (¶41 - goods are conveyed, i.e., further goods will be imaged).

Regarding claim 9, Kuchenbrod discloses the method of claim 1 above and further discloses determining whether the generated quality metric for the image satisfies predetermined criteria before adjusting the set of attributes of the imaging device (¶65 - within a threshold).
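The anticipated method of claims 1, 9, and 10 amounts to a closed feedback loop: capture under the current attribute set, locate the symbol ROI, score decodability, and adjust until the score satisfies the predetermined criteria. A minimal sketch of that loop follows; every name here (FakeDevice, find_symbol_roi, score_decodability, adjust) is an illustrative stand-in, not an implementation from the application or from Kuchenbrod:

```python
# Illustrative sketch of the claimed capture -> ROI -> quality -> adjust loop.
# All components are stubs; a real reader would use its stored focus-drift
# and tuning values rather than a fixed step.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Attributes:
    focus: float
    exposure: float

class FakeDevice:
    """Stand-in imaging device: the 'image' just records the capture settings."""
    def capture(self, attrs):
        return {"focus": attrs.focus}

def find_symbol_roi(image):
    # Stub detector: pretend a symbol is always present at a fixed region.
    return (10, 10, 50, 50)

def score_decodability(image, roi):
    # Stub quality metric: decodability falls off with distance from an
    # ideal focal plane (focus == 1.0 here, purely for illustration).
    return max(0.0, 1.0 - abs(image["focus"] - 1.0))

def adjust(attrs, quality):
    # Nudge focus toward the ideal plane by a fixed step.
    return replace(attrs, focus=attrs.focus + 0.25)

def tune(device, attrs, quality_threshold=0.8, max_rounds=8):
    """Capture, locate the ROI, score it, and adjust the attributes,
    stopping once the metric satisfies the criteria (cf. claims 9-10)."""
    for _ in range(max_rounds):
        image = device.capture(attrs)      # captured per current attribute set
        roi = find_symbol_roi(image)
        if roi is None:                    # no symbol in this frame: retry
            continue
        quality = score_decodability(image, roi)
        if quality >= quality_threshold:   # criteria met: stop adjusting
            break
        attrs = adjust(attrs, quality)     # criteria not met: keep adjusting
    return attrs

print(tune(FakeDevice(), Attributes(focus=0.0, exposure=1.0)).focus)  # 1.0
```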
Regarding claim 10, Kuchenbrod discloses the method of claim 9 above and further discloses: when it is determined that the quality metric for the image does not satisfy the predetermined criteria, proceeding with adjusting the set of attributes of the imaging device based, at least in part, on the quality metric for the image; and when it is determined that the quality metric for the image satisfies the predetermined criteria, stopping adjusting the set of attributes of the imaging device (¶65 - within a threshold).

Regarding claim 11, Kuchenbrod discloses the method of claim 1 above and further discloses wherein the set of attributes comprises at least one of: focus (¶42), exposure (¶42), status of light banks (¶30 - a signal-controlled light source, such as a light source triggered by an object detection system), or image filtering (¶46).

Regarding claim 12, Kuchenbrod discloses: an imaging device associated with a set of attributes (¶42 - imaging reader 106 obtains scanning parameters; ¶44 - obtaining the scanning parameters may include obtaining calibration parameters, which may include one or more defocus parameters) and configured to capture images according to the set of attributes (¶45 - the variable focus imaging controller 214 may assess the obtained scanning parameters and control the image sensor, variable focus optical element, and other components of the imaging reader 106 according to the scanning parameters to obtain the image); and at least one processor configured to execute computer executable instructions (¶27-¶28, ¶72), wherein the computer executable instructions comprise instructions for: receiving, from the imaging device, an image of an object that is at least partially within a field-of-view (FOV) of the imaging device (¶45); in response to a presence of a symbol within the image, generating a region of interest (ROI) of the symbol (¶45 - the processor of the imaging reader 106, at 406, identifies a region of interest in the captured image; the region of interest may include a barcode); generating a quality metric for the image (¶46 - at 408, the processor of the imaging reader 106 analyzes the image and determines an image quality of the image; ¶46 - a blur value), wherein the quality metric indicates a measurement that the ROI of the symbol can be decoded (¶56 - the near and far focus limits are predetermined minimum focal plane and maximum focal plane distance thresholds that are limits of the focus of the imaging reader 106, outside of which the imaging and decoding of indicia of the imaging reader degrades); and adjusting the set of attributes of the imaging device based, at least in part, on the quality metric for the image (¶44 - defocus parameters; ¶68 - the processor may then provide the retrieved focus drift and/or tuning amount to the variable focus imaging controller 214, the actuator 215, the variable focus optical element 208, or other elements of the imaging reader 106 to tune the focus of the imaging reader 106 according to the previously determined focus drift).

Regarding claim 13, Kuchenbrod discloses the product of claim 12 above and further discloses wherein one or more steps of the instructions run in parallel on respective processors of the at least one processor (¶27-¶28, ¶72).

Regarding claim 14, Kuchenbrod discloses the product of claim 12 above and further discloses wherein each of the at least one processor is a CPU (¶27-¶28, ¶72).

Regarding claim 18, Kuchenbrod discloses the product of claim 12 above and further discloses wherein: the set of attributes comprises at least one of: focus (¶42), exposure (¶42), status of light banks (¶30 - a signal-controlled light source, such as a light source triggered by an object detection system), or image filtering (¶46); and the computer executable instructions comprise instructions for receiving a second image of the object from the imaging device captured according to the adjusted set of attributes (¶41 - goods are conveyed, i.e., further goods will be imaged).

Regarding claim 19, Kuchenbrod discloses the product of claim 12 above and further discloses wherein the computer executable instructions comprise instructions for: determining whether the generated quality metric for the image satisfies predetermined criteria before adjusting the set of attributes of the imaging device; when it is determined that the quality metric for the image does not satisfy the predetermined criteria, proceeding with adjusting the set of attributes of the imaging device based, at least in part, on the quality metric for the image; and when it is determined that the quality metric for the image satisfies the predetermined criteria, stopping adjusting the set of attributes of the imaging device (¶65 - within a threshold).
Regarding claim 20, Kuchenbrod discloses: a non-transitory computer readable medium storing computer executable instructions configured to, when executed by at least one processor (¶27-¶28, ¶72), cause the at least one processor to: receive, from an imaging device associated with a set of attributes (¶42 - imaging reader 106 obtains scanning parameters; ¶44 - obtaining the scanning parameters may include obtaining calibration parameters, which may include one or more defocus parameters), an image of an object that is at least partially within a field-of-view (FOV) of the imaging device, wherein the imaging device captured the image according to the set of attributes (¶45 - the variable focus imaging controller 214 may assess the obtained scanning parameters and control the image sensor, variable focus optical element, and other components of the imaging reader 106 according to the scanning parameters to obtain the image); in response to a presence of a symbol within the image, generate a region of interest (ROI) of the symbol (¶45 - the processor of the imaging reader 106, at 406, identifies a region of interest in the captured image; the region of interest may include a barcode); generate a quality metric for the image (¶46 - at 408, the processor of the imaging reader 106 analyzes the image and determines an image quality of the image; ¶46 - a blur value), wherein the quality metric indicates a measurement that the ROI of the symbol can be decoded (¶56 - the near and far focus limits are predetermined minimum focal plane and maximum focal plane distance thresholds that are limits of the focus of the imaging reader 106, outside of which the imaging and decoding of indicia of the imaging reader degrades); and adjust the set of attributes of the imaging device based, at least in part, on the quality metric for the image (¶44 - defocus parameters; ¶68 - the processor may then provide the retrieved focus drift and/or tuning amount to the variable focus imaging controller 214, the actuator 215, the variable focus optical element 208, or other elements of the imaging reader 106 to tune the focus of the imaging reader 106 according to the previously determined focus drift).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-7 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Kuchenbrod in view of Zagaynov et al. (US 20230367983 A1).

Regarding claim 2, Kuchenbrod discloses the method of claim 1 above but is silent regarding generating, using a pre-trained deep learning model, a plurality of candidate ROIs; and selecting the ROI of the symbol from the plurality of candidate ROIs. Zagaynov teaches (Fig. 1) decoding of two-dimensional barcodes under unfavorable conditions, wherein classification probabilities generated by the neural network model may be used to select the most likely symbols and to decode the data contained in the barcode image (¶39), and Fig. 11 illustrates mapping of a set of candidate locations 1100 of modules, as may be output by a neural network (¶98). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to provide the imaging reader of Kuchenbrod with the neural network of Zagaynov, utilizing a neural network model to determine and select ROIs in order to analyze and decode barcodes, as use of a known technique to improve similar devices in the same way.

Regarding claim 3, Kuchenbrod discloses the method of claim 1 above and further discloses wherein generating the quality metric for the image comprises: extracting a plurality of features from at least one of the image or the ROI of the symbol; and generating the quality metric based on the plurality of features (¶46-¶47). Kuchenbrod is silent regarding the generating by using a machine learning model.
Zagaynov teaches that training engine 152 may train NN model(s) 114a that include multiple neurons to perform barcode detection and decoding (¶55); that after NN model(s) 114a are trained, the set of NN model(s) 114a may be provided to computing device 110 for inference analysis of new barcode images (¶56); and that barcode identification and preprocessing 210 may further include enhancing quality of barcode image 202 (e.g., de-blurring, filtering, sharpening, etc.) and identifying main directions of the barcode (¶58). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to provide the imaging reader of Kuchenbrod with the neural network of Zagaynov, utilizing a neural network model to determine and select ROIs in order to analyze and decode barcodes, as use of a known technique to improve similar devices in the same way.

Regarding claim 4, Kuchenbrod modified by Zagaynov teaches the method of claim 3 and further teaches wherein extracting the plurality of features comprises at least one of: extracting a set of global features from the image, extracting a set of regional features from the ROI of the symbol, or extracting a set of decoder features from a decode result of the ROI of the symbol (Zagaynov: ¶35-¶36, ¶88).

Regarding claim 5, Kuchenbrod modified by Zagaynov teaches the method of claim 4 and further teaches wherein the set of global features comprises a standard deviation of the image (Zagaynov: ¶67).

Regarding claim 6, Kuchenbrod modified by Zagaynov teaches the method of claim 4 and further teaches wherein the set of regional features comprises a standard deviation of the ROI of the symbol (Zagaynov: ¶67).

Regarding claim 7, Kuchenbrod modified by Zagaynov teaches the method of claim 4 and further teaches wherein the set of decoder features comprises at least one of: a background uniformity (Zagaynov: ¶66) or a quiet zone matching (Zagaynov: ¶35, ¶63-¶64).
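Claims 2-7 recite a concrete pipeline over the §103 combination: a pre-trained detector proposes candidate ROIs and one is selected; global features (image standard deviation), regional features (ROI standard deviation), and decoder features (background uniformity, quiet-zone match) then feed a learned quality model. A minimal sketch with a trivial weighted-sum stand-in for the trained model; the function names, the 1-D pixel treatment, and the weights are all illustrative assumptions, not from either reference:

```python
# Illustrative feature-extraction pipeline for claims 2-7. A weighted sum
# stands in for the machine-learning model; real systems would use a
# trained network as in Zagaynov.
import statistics

def select_roi(candidates):
    """Pick the highest-confidence candidate ROI (cf. claim 2)."""
    return max(candidates, key=lambda c: c["confidence"])["roi"]

def global_features(pixels):
    """Whole-image features, e.g. the standard deviation of claim 5."""
    return [statistics.pstdev(pixels)]

def regional_features(pixels, roi):
    """ROI features, e.g. the standard deviation of claim 6 (1-D crop for brevity)."""
    lo, hi = roi
    return [statistics.pstdev(pixels[lo:hi])]

def decoder_features(decode_result):
    """Decoder-derived features: background uniformity, quiet-zone match (claim 7)."""
    return [decode_result["background_uniformity"], decode_result["quiet_zone_match"]]

def quality_metric(features, weights):
    """Stand-in for the trained model: weighted sum clamped to [0, 1]."""
    score = sum(w * f for w, f in zip(weights, features))
    return max(0.0, min(1.0, score))

# Example run with made-up pixel data and weights.
pixels = [0, 40, 80, 120, 160, 200, 160, 80]
roi = select_roi([{"roi": (2, 6), "confidence": 0.9},
                  {"roi": (0, 3), "confidence": 0.4}])
feats = (global_features(pixels) + regional_features(pixels, roi)
         + decoder_features({"background_uniformity": 0.8, "quiet_zone_match": 1.0}))
print(quality_metric(feats, weights=[0.002, 0.002, 0.3, 0.3]))
```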
Regarding claim 15, Kuchenbrod discloses the product of claim 12 above but is silent regarding wherein the computer executable instructions further comprise instructions for: generating, using a pre-trained deep learning model, a plurality of candidate ROIs; and selecting the ROI of the symbol from the plurality of candidate ROIs. Zagaynov teaches that classification probabilities generated by the neural network model may be used to select the most likely symbols and to decode the data contained in the barcode image (¶39), and that Fig. 11 illustrates mapping of a set of candidate locations 1100 of modules, as may be output by a neural network (¶98). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to provide the imaging reader of Kuchenbrod with the neural network of Zagaynov, utilizing a neural network model to determine and select ROIs in order to analyze and decode barcodes, as use of a known technique to improve similar devices in the same way.

Regarding claim 16, Kuchenbrod discloses the product of claim 12 above and further discloses wherein generating the quality metric for the image comprises: extracting a plurality of features from at least one of the image or the ROI of the symbol; and generating the quality metric based on the plurality of features (¶46-¶47). Kuchenbrod is silent regarding the generating by using a machine learning model. Zagaynov teaches that training engine 152 may train NN model(s) 114a that include multiple neurons to perform barcode detection and decoding (¶55); that after NN model(s) 114a are trained, the set of NN model(s) 114a may be provided to computing device 110 for inference analysis of new barcode images (¶56); and that barcode identification and preprocessing 210 may further include enhancing quality of barcode image 202 (e.g., de-blurring, filtering, sharpening, etc.) and identifying main directions of the barcode (¶58). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to provide the imaging reader of Kuchenbrod with the neural network of Zagaynov, utilizing a neural network model to determine and select ROIs in order to analyze and decode barcodes, as use of a known technique to improve similar devices in the same way.

Regarding claim 17, Kuchenbrod modified by Zagaynov teaches the product of claim 16 and further teaches wherein extracting the plurality of features comprises at least one of: extracting a set of global features from the image, extracting a set of regional features from the ROI of the symbol, or extracting a set of decoder features from a decode result of the ROI of the symbol (Zagaynov: ¶35-¶36, ¶88).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOAN C LY, whose telephone number is (571) 270-7898. The examiner can normally be reached Monday - Friday, 8 AM - 4 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steven Paik, can be reached at 571-272-2404. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/TOAN C LY/
Primary Examiner, Art Unit 2876

Prosecution Timeline

Dec 20, 2024: Application Filed
Apr 02, 2025: Response after Non-Final Action
Dec 10, 2025: Non-Final Rejection (§102, §103)
Mar 10, 2026: Interview Requested
Mar 20, 2026: Applicant Interview (Telephonic)
Mar 20, 2026: Examiner Interview Summary
Mar 30, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596277: DISPLAY PANEL, DISPLAY DEVICE, AND METHOD FOR MANUFACTURING THE SAME
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12579617: Detection of objects of a moving object stream
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579403: ANTENNA PATTERN AND RFID INLAY
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12547859: METHOD AND SYSTEM THAT PROVIDES ACCESS TO CUSTOM AND INTERACTIVE CONTENT FROM AN OPTICAL CODE
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12546455: OPTIC FOR BACKLIGHT CONTROL
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+20.2%)
Median Time to Grant: 2y 1m
PTA Risk: Low

Based on 488 resolved cases by this examiner. Grant probability derived from career allow rate.
