Prosecution Insights
Last updated: April 19, 2026
Application No. 18/621,091

Method to Use Edge Computing to Detect Non-Payload Encoding Visual Features for Optical Character Recognition

Status: Non-Final Office Action (§103)
Filed: Mar 28, 2024
Examiner: KOETH, MICHELLE M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Zebra Technologies Corporation
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 4m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 77% (above average; 331 granted / 429 resolved; +15.2% vs Tech Center average)
Interview Lift: +16.7% across resolved cases with interview
Typical Timeline: 2y 4m average prosecution; 34 applications currently pending
Career History: 463 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)
TC averages are estimates. Based on career data from 429 resolved cases.
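The per-statute deltas above can be checked against the stated rates directly: subtracting each delta from the examiner's rate recovers the Tech Center baseline the dashboard appears to be comparing against. A minimal sketch, using only the figures from the table (the common ~40% result is an observation about this data, not a documented property of the tool):

```python
# Implied Tech Center averages: examiner rate minus the "vs TC avg" delta.
# All numbers are copied from the statute-specific performance table above.
rates = {"101": 7.4, "103": 62.2, "102": 8.5, "112": 14.7}           # examiner, %
deltas = {"101": -32.6, "103": +22.2, "102": -31.5, "112": -25.3}    # vs TC avg, %

# rate = tc_avg + delta, so tc_avg = rate - delta for each statute
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute implies the same ~40% baseline
```

Notably, all four deltas resolve to the same 40.0% baseline, suggesting the comparison is made against a single Tech Center aggregate rather than per-statute averages.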

Office Action

Rejection basis: §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1–5, 8–17 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Kundig et al., US Patent No. 11,495,036 B1 (herein "Kundig"), in view of Hagen et al., US Patent Application Publication No. US 2023/0232108 A1 (herein "Hagen").
Regarding claims 1, 10 and 23, with deficiencies of Kundig noted in square brackets [ ], substantive differences between the claims noted in curly brackets { }, and claim 1 as illustrative, Kundig teaches:

{an imaging device, comprising: an imaging assembly configured to capture image data of an object appearing in a field of view (FOV); one or more processors; and one or more computer-readable media storing machine readable instructions that, when executed, cause the one or more processors to: - claim 1 / An imaging system, comprising: an imaging assembly configured to capture image data of an object appearing in a field of view (FOV); and one or more computer-readable media storing machine readable instructions that, when executed, cause the imaging system to: - claim 10 / A method in an imaging system including an imaging assembly configured to capture image data of an object appearing in a field of view (FOV) and an [edge]-computing module, the method comprising: - claim 23} (Kundig col. 1, ll. 49–63, an apparatus for optical recognition including a camera (imaging assembly) and one or more processors configured to execute a method that detects a location within the image (field of view) having characters (object), where col. 21, ll. 28–41 teaches that implementation of the disclosed techniques and means is done with a hardware/software combination including sets of instructions on a computer-readable medium);

capture, using the imaging assembly, the image data of the object appearing in the FOV (Kundig fig. 3, col. 8, ll. 50–55, a camera acquires an image of a barcode and the image is received for processing);

attempt to decode the image data of the object (Kundig figs. 2 and 3, col. 8, ll. 54–59, the system attempts to decode the barcode in the image);

responsive to an unsuccessful attempt to decode the image data, detect a non-payload encoding visual feature (Kundig fig. 2, col. 8, l. 60 – col. 9, l. 10, when at step 308 the system fails at decoding the barcode in the image, the image is divided to obtain segment 220 (shown in fig. 2 to be text) based on a location or size of the barcode, and an algorithm is run on the segment to detect an alphanumeric code (non-payload encoding visual feature)); and

responsive to detecting the non-payload encoding visual feature, transmit, to an [edge-computing module], a request for an optical character recognition (OCR) operation to be performed for the object (Kundig col. 9, ll. 6–18, if the alphanumeric code is detected (responsive to), then an OCR algorithm is run (where the instantiation of the OCR algorithm is considered a request) on the image segment and a character string is generated from the segment with the alphanumeric code; also col. 21, ll. 20–27, teaching that processing segments can be coupled to each other by way of transmitting information, data, arguments, parameters and memory contents, therefore the OCR processing being by way of a transmitted request; and col. 22, ll. 47–54 teaches that the processes are distributed among multiple devices).

While Kundig teaches that, responsive to the barcode not being decoded, an OCR process is executed against text appearing with the barcode, Kundig does not explicitly teach that the OCR process is on an edge-computing module, and Kundig also does not explicitly teach that its system includes an edge-computing module specifically as one of the processors. Hagen teaches an edge-computing module that has transmitted to it requests for an OCR operation (Hagen ¶ 48, a portion of the image with the region of interest is transmitted to the edge computing device, where ¶ 51 teaches the edge computing device performs OCR techniques on the portion of the image).
Therefore, taking the teachings of Kundig and Hagen together as a whole, it would have been obvious to a person having ordinary skill in the art (herein "PHOSITA") before the effective filing date of the claimed invention to have modified the image processing of Kundig to include OCR with an edge-computing device as disclosed by Hagen, at least because doing so would result in increased efficiency of memory use and computing power, faster identification of objects, and avoiding clogging network bandwidth. Hagen ¶ 19.

Regarding claims 2 and 14, Kundig teaches wherein the non-payload encoding visual feature includes a human face (Kundig col. 17, ll. 20–24, facial recognition used as an anchor for OCR, where the location of segments to be OCR'd (non-payload encoding visual feature) is determined by an identified face position).

Regarding claims 3 and 15, Kundig teaches wherein the non-payload encoding visual feature includes a non-payload encoding indicia (Kundig col. 17, ll. 20–24, facial recognition used as an anchor for OCR, where the location of segments to be OCR'd (non-payload encoding visual feature) is determined by an identified face (non-payload encoding indicia) position, the face indicating the non-payload encoding because it is used as an anchor for the segment to be OCR'd).

Regarding claims 4 and 16, with claim 4 as exemplary, while Kundig teaches wherein the image data is first image data (Kundig fig. 3, col. 8, ll. 50–55, an image of a barcode) and the one or more computer-readable media stores additional machine readable instructions that, when executed, cause the one or more processors to (Kundig col. 1, ll. 49–63, one or more processors configured to execute a method, where col. 21, ll. 28–41 teaches that implementation of the disclosed techniques and means is done with a hardware/software combination including sets of instructions on a computer-readable medium), Kundig does not explicitly teach the remainder of claims 4 and 16, where Hagen teaches: responsive to detecting the non-payload encoding visual feature, capture, using the imaging assembly, second image data of the object appearing in the FOV (Hagen ¶¶ 43, 47 and 50, when the barcode is not detected within a detected field of reference (FOR) (the non-payload encoding visual feature), the camera is moved to an updated location relative to the x and y coordinates defining the FOR, and steps B through H are repeated, including step F of capturing another image (second image data) of the FOV); wherein the request for the OCR operation to be performed {includes - claim 4}/{is based on - claim 16} the second image data of the object (Hagen ¶ 51, step H (after step F) is to perform OCR techniques on the captured image of the product (object) in the image data).

Therefore, taking the teachings of Kundig and Hagen together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the image processing of Kundig to include taking additional images and performing OCR requests thereto as disclosed by Hagen, at least because doing so would result in increased efficiency of memory use and computing power, faster identification of objects, and avoiding clogging network bandwidth. Hagen ¶ 19.

Regarding claims 5 and 17, Kundig teaches wherein detecting the non-payload encoding visual feature is initiated automatically responsive to the unsuccessful attempt (Kundig fig. 3, col. 8, l. 60 – col. 9, l. 14, the system is able to ascertain that the barcode failed to decode, and thus moves to step 312 next (automatically responsive), which divides the image to obtain the region where the barcode is located to determine a section (non-payload encoding visual feature) to attempt to OCR).

Regarding claims 8 and 12, with claim 8 as exemplary, Kundig teaches further comprising: a housing disposed to house: the imaging assembly; {the one or more processors; - claim 8 / the computing device - claim 12} and the one or more computer-readable media (Kundig fig. 1, col. 4, l. 63 – col. 5, l. 3, col. 20, ll. 17–60, as shown in fig. 1, a system as a tablet mobile device (fig. 1 showing the housing), having a camera and a memory device, the system performing the decoding, and where col. 20, ll. 17–53 disclose the tablet computer including processors and a non-transitory storage medium).

Regarding claim 9, while Kundig teaches the housing as part of a tablet computer, which could itself suggest an "edge-computing module" under a broadest reasonable interpretation of "edge-computing" to mean a computer node on the edge of a network, closest to sensor controls and users (Kundig fig. 1, col. 4, l. 63 – col. 5, l. 3, col. 20, ll. 17–60, as shown in fig. 1, a system as a tablet mobile device (fig. 1 showing the housing)), Kundig nonetheless does not explicitly teach an "edge-computing module." However, Hagen teaches an edge-computing module (Hagen ¶ 48, a portion of the image with the region of interest is transmitted to the edge computing device).
Therefore, taking the teachings of Kundig and Hagen together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the image processing computing device of Kundig to include an edge-computing device as disclosed by Hagen, at least because doing so would result in increased efficiency of memory use and computing power, faster identification of objects, and avoiding clogging network bandwidth. Hagen ¶ 19.

Regarding claim 11, with deficiencies of Kundig noted in square brackets [ ], Kundig teaches further comprising: an imaging device including the imaging assembly and the one or more computer-readable media; and a computing device [including the edge-computing module], the computing device communicatively coupled to the imaging device (Kundig fig. 1, col. 4, l. 63 – col. 5, l. 3, col. 20, ll. 17–60, as shown in fig. 1, a system as a tablet mobile device (fig. 1 showing the housing), having a camera, a memory device and a communications connection providing the image, the system performing the decoding, and where col. 20, ll. 17–53 disclose the tablet computer including processors and a non-transitory storage medium). Kundig does not explicitly teach, where Hagen teaches, including the edge-computing module (Hagen ¶ 48, a portion of the image with the region of interest is transmitted to the edge computing device).

Therefore, taking the teachings of Kundig and Hagen together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the image processing computing device of Kundig to include an edge-computing device as disclosed by Hagen, at least because doing so would result in increased efficiency of memory use and computing power, faster identification of objects, and avoiding clogging network bandwidth. Hagen ¶ 19.
Regarding claim 13, Kundig teaches further comprising: an imaging device including: the imaging assembly; the one or more computer-readable media (Kundig fig. 1, col. 4, l. 63 – col. 5, l. 3, col. 20, ll. 17–60, as shown in fig. 1, a system as a tablet mobile device (fig. 1 showing the housing), having a camera and a memory device, the system performing the decoding, and where col. 20, ll. 17–53 disclose the tablet computer including processors and a non-transitory storage medium). Kundig does not explicitly teach, where Hagen teaches, the edge-computing module (Hagen ¶ 48, a portion of the image with the region of interest is transmitted to the edge computing device).

Therefore, taking the teachings of Kundig and Hagen together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the image processing computing device of Kundig to include an edge-computing device as disclosed by Hagen, at least because doing so would result in increased efficiency of memory use and computing power, faster identification of objects, and avoiding clogging network bandwidth. Hagen ¶ 19.

Claims 6–7 and 18–19 are rejected under 35 U.S.C. 103 as being unpatentable over Kundig in view of Hagen, and further in view of Horner et al., US Patent Application Publication No. US 2023/0042611 A1 (herein "Horner").

Regarding claims 6 and 18, with claim 6 as exemplary, Kundig as modified by Hagen does not teach, where Horner teaches, wherein detecting the non-payload encoding visual feature includes: detecting the non-payload encoding visual feature using a trained algorithm (Horner ¶¶ 53, 56, the smart imaging application and the OCR performance enhancement application (which identify an indicia and character string in an image) comprise a machine learning-based model (trained algorithm)).
Therefore, taking the teachings of Kundig as modified by Hagen and Horner together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the image processing of Kundig to include using a trained algorithm in image processing as disclosed by Horner, at least because doing so would result in enhanced OCR performance. Horner ¶ 4.

Regarding claims 7 and 19, with claim 7 as exemplary, Kundig as modified by Hagen does not teach, where Horner teaches, wherein the one or more computer-readable media stores additional machine readable instructions that, when executed, cause the one or more processors to (Hagen ¶¶ 73–74, implementations of the disclosed processes including a machine readable medium executing instructions): generate the trained algorithm by training an algorithm to detect a non-payload encoding visual feature (Hagen ¶¶ 28, 33, training of an OCR system using barcode data which includes a character string (non-payload encoding visual feature) proximate to the barcode).

Therefore, taking the teachings of Kundig as modified by Hagen and Horner together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the image processing of Kundig to include using a trained algorithm in image processing as disclosed by Horner, at least because doing so would result in enhanced OCR performance. Horner ¶ 4.

Claims 20–22 are rejected under 35 U.S.C. 103 as being unpatentable over Kundig in view of Hagen, and further in view of Hull et al., US Patent Application Publication No. US 2007/0052997 A1 (herein "Hull").

Regarding claim 20, while Kundig as modified by Hagen teaches performing the OCR operation (Kundig col. 9, ll. 6–18, if the alphanumeric code is detected (responsive to), then an OCR algorithm is run), including analyzing the text to extract information associated with the object, and text associated with the object (Kundig col. 9, ll. 6–31, text near a barcode is analyzed to determine a VIN or other metadata describing the barcode/object), Kundig as modified by Hagen does not explicitly teach, where Hull teaches, detecting one or more fonts for text (Hull ¶¶ 220–221, 250, a font detection algorithm is used in OCR functionality to extract text from a section of a document).

Therefore, taking the teachings of Kundig as modified by Hagen and Hull together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the image processing of Kundig to include font detection as disclosed by Hull, at least because doing so would provide a way to retrieve printed text into a dynamic medium, providing an entry point to electronic content or services of interest to the user, from physical, paper-based content. See Hull ¶¶ 89, 84.

Regarding claim 21, Kundig as modified by Hagen teaches wherein the one or more computer-readable media stores additional machine readable instructions that, when executed, cause the imaging system to (Kundig col. 1, ll. 49–63, one or more processors configured to execute a method, where col. 21, ll. 28–41 teaches that implementation of the disclosed techniques and means is done with a hardware/software combination including sets of instructions on a computer-readable medium). Kundig does not teach, where Hull teaches, pre-populate one or more information fields of a form associated with the object or a user related to the object (Hull ¶¶ 8, 162, a form is filled out automatically with previously entered information that is captured via the mixed media reality system).
Therefore, taking the teachings of Kundig as modified by Hagen and Hull together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the image processing of Kundig to include form pre-population as disclosed by Hull, at least because doing so would provide a way to retrieve printed text into a dynamic medium, providing an entry point to electronic content or services of interest to the user, from physical, paper-based content. See Hull ¶¶ 89, 84.

Regarding claim 22, Kundig as modified by Hagen does not explicitly teach, where Hull teaches, wherein the analyzing the text is performed via a neural network (Hull ¶ 250, the font detection algorithm and character recognition technique (analyzing the text) via feature extraction include a neural network).

Therefore, taking the teachings of Kundig as modified by Hagen and Hull together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the image processing of Kundig to include text analysis including a neural network as disclosed by Hull, at least because doing so would provide a way to retrieve printed text into a dynamic medium, providing an entry point to electronic content or services of interest to the user, from physical, paper-based content. See Hull ¶¶ 89, 84.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Gupta et al., "SmartIdOCR: Automatic Detection and Recognition of Identity card number using Deep Networks," 2021 Sixth International Conference on Image Information Processing (ICIIP), pp. 267–272, directed towards image processing on ID cards including barcodes and face images.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M KOETH, whose telephone number is (571) 272-5908.
The examiner can normally be reached Monday–Thursday, 09:00–17:00, and Friday, 09:00–13:00, EDT/EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE M KOETH/
Primary Examiner, Art Unit 2671

Prosecution Timeline

Mar 28, 2024: Application Filed
Feb 18, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by the same examiner involving similar technology

Patent 12586221 · METHOD AND APPARATUS FOR ESTIMATING DEPTH INFORMATION OF IMAGES
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579651 · IMPEDED DIFFUSION FRACTION FOR QUANTITATIVE IMAGING DIAGNOSTIC ASSAY
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12567241 · Method For Generating Training Data Used To Learn Machine Learning Model, System, And Non-Transitory Computer-Readable Storage Medium Storing Computer Program
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12567177 · METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE PROCESSING
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12566493 · METHODS AND SYSTEMS FOR EYE-GAZE LOCATION DETECTION AND ACCURATE COLLECTION OF EYE-GAZE DATA
Granted Mar 03, 2026 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 94% (+16.7%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 429 resolved cases by this examiner. Grant probability is derived from the career allow rate.
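The headline figures above are consistent with adding the interview lift directly to the base grant probability; a one-line check using the dashboard's own numbers:

```python
base_grant_probability = 0.77    # career allow rate, per the dashboard
interview_lift = 0.167           # absolute lift on resolved cases with interview

with_interview = base_grant_probability + interview_lift
print(f"{with_interview:.0%}")   # prints "94%", matching the projection above
```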
