Prosecution Insights
Last updated: April 19, 2026
Application No. 17/958,188

RANKED ADAPTIVE ROI FOR VISION CAMERAS

Final Rejection (§101, §103)
Filed: Sep 30, 2022
Examiner: RODRIGUEZ, ANTHONY JASON
Art Unit: 2672
Tech Center: 2600 (Communications)
Assignee: Zebra Technologies Corporation
OA Round: 2 (Final)
Grant Probability: 17% (At Risk)
OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: -5%

Examiner Intelligence

Grants only 17% of cases.

Career Allow Rate: 17% (3 granted / 18 resolved; -45.3% vs TC avg)
Interview Lift: -21.4% (minimal)
Avg Prosecution: 3y 2m (typical timeline)
Total Applications: 65 across all art units; 47 currently pending

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)

Tech Center averages are estimates; based on career data from 18 resolved cases.

Office Action

Rejections under 35 U.S.C. §§ 101 and 103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see Remarks pages 1-4, filed 11/17/2025, with respect to the rejection of amended claim 1 under 35 U.S.C. 101 have been fully considered but they are not persuasive. On pages 1-4 of Remarks, Applicant argues: [Applicant's argument is reproduced as an image in the original Office Action.] Examiner respectfully disagrees. Applicant argues that the amended claim 1 overcomes the previously recited rejection of claim 1 under 35 U.S.C. 101 through the amended limitations of “wherein each of the first and the second ROIs are less than the entire FOV,” “generating a recurrence frequency for each of the first and second ROIs,” and “storing a default ranked set of ROIs based on the respective recurrence frequencies,” which are argued by the Applicant to integrate the judicial exception into a practical application and/or amount to significantly more than the judicial exception.

MPEP § 2106.05(a) recites: “An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107. In this respect, the improvement consideration overlaps with other considerations, specifically the particular machine consideration (see MPEP § 2106.05(b)), and the mere instructions to apply an exception consideration (see MPEP § 2106.05(f)). Thus, evaluation of those other considerations may assist examiners in making a determination of whether a claim satisfies the improvement consideration. It is important to note, the judicial exception alone cannot provide the improvement. 
The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981) in subsection II, below. In addition, the improvement can be provided by the additional element(s) in combination with the recited judicial exception. See MPEP § 2106.04(d) (discussing Finjan, Inc. v. Blue Coat Sys., Inc., 879 F.3d 1299, 1303-04, 125 USPQ2d 1282, 1285-87 (Fed. Cir. 2018)). Thus, it is important for examiners to analyze the claim as a whole when determining whether the claim provides an improvement to the functioning of computers or an improvement to other technology or technical field.”

As is disclosed by MPEP § 2106.05(a), when analyzing whether a claim discloses an improvement, “the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements.” As is disclosed below and in the previous Office Action, the additional elements identified to be present within the claim are: “…machine vision system including a computing device for executing an application and a fixed imaging device communicatively coupled to the computing device”; wherein the claim merely invokes computers or other machinery as a tool to perform an existing process. The additional elements fail to integrate a judicial exception into a practical application or add significantly more to the abstract idea because they simply involve applying the abstract idea on a machine vision system without any recitation of details of how to carry out the abstract ideas related to visual feature detection and ROI ranking, such as how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. 
“…capturing, via the fixed imaging device, a first image over a field of view (FOV),” “…capturing, via the fixed imaging device, a second image,” and “…capturing, via the fixed imaging device, a third image”; Wherein the additional elements relating to the capturing of the images are insignificant pre-solution activities to the judicial exception, and thus fail to integrate the judicial exception into a practical application due to the limitations amounting to mere data gathering, which does not meaningfully limit the processes of visual feature detection and ROI ranking. “…responsive to detecting the visual feature in the third image, transmitting data associated with the visual feature in the third image to a host processor”; Wherein the additional elements relating to the transmission of the images are insignificant post-solution activities to the judicial exception, and thus fail to integrate the judicial exception into a practical application due to the limitations amounting to mere data outputting, which does not meaningfully limit the processes of visual feature detection and ROI ranking. Thus, since the additional elements separately, and in combination, fail to meaningfully limit the method’s processes of visual feature detection and ROI ranking, and fail to provide an improvement to the technological field of machine vision technology in combination with the limitations directed to mental processes: “wherein each of the first and the second ROIs are less than the entire FOV,” “generating a recurrence frequency for each of the first and second ROIs,” and “storing a default ranked set of ROIs based on the respective recurrence frequencies.” Therefore, the rejection of amended claim 1 under 35 U.S.C. 101 is maintained.

Applicant’s arguments, see Remarks pages 4-5, filed 11/17/2025, with respect to the rejection of amended claim(s) 1 under 35 U.S.C. 
102(a)(1) have been fully considered and are moot in view of the new grounds of rejection (detailed in the rejections below) necessitated by Applicant’s amendment to the claim(s).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (a mental process of detecting visual features in images) without significantly more. Claim 1 recites: “…analyzing…at least a portion of the first image to detect a visual feature within the first image”; and “…analyzing…at least a portion of the second image to detect the visual feature within the second image”; Which can be reasonably interpreted as a human observer mentally detecting visual features within a first and a second image. “…determining…a location of the visual feature within the first image”; and “…determining…a location of the visual feature within the second image”; Which can be reasonably interpreted as a human observer mentally determining the locations of the detected feature in the first and second images. “…determining…a first region of interest (ROI) within the first image based on the location of the visual feature”; and “…determining…a second ROI within the second image based on the location of the visual feature”; and “…wherein each of the first and the second ROIs are less than the entire FOV”; Which can be reasonably interpreted as a human observer mentally determining ROIs within the first and second images based on the determined locations of the detected features. 
“…ranking the first and second ROIs”; and “…wherein ranking the first and second ROIs includes applying each of the first and second ROIs to multiple images, analyzing the multiple images to determine whether the visual feature is located within each of the first and the second ROIs, generating a recurrence frequency for each of the first and second ROIs, and storing a default ranked set of ROIs based on the respective recurrence frequencies.”; Which can be reasonably interpreted as a human observer mentally ranking the first and second ROIs by mentally determining a recurrence frequency for each ROI and ordering the ROIs based on the determined recurrence frequency. “…analyzing…a third ROI within the third image to detect the visual feature, the third ROI within the third image being based on the ranking”; Which can be reasonably interpreted as a human observer mentally analyzing a third ROI, determined based on the ranking, in order to mentally detect the visual feature in a third image. This judicial exception is not integrated into a practical application because of additional elements: “…machine vision system including a computing device for executing an application and a fixed imaging device communicatively coupled to the computing device”; are generically recited computer elements that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer and pertain to a generically recited machine vision system including a generically recited computing device and a generically recited fixed imaging device. “…capturing, via the fixed imaging device, a first image over a field of view (FOV)”; and “…capturing, via the fixed imaging device, a second image”; and “…capturing, via the fixed imaging device, a third image”; are generically recited extra-solution activity of data gathering. 
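The ranking limitation quoted above describes a concrete procedure: apply each stored ROI to a set of images, count how often the visual feature recurs inside each ROI, and store the ROIs ordered by that recurrence frequency. A minimal sketch, assuming a hypothetical `feature_in_roi` predicate (the claim does not specify any particular detection method):

```python
def rank_rois(rois, images, feature_in_roi):
    """Rank ROIs by recurrence frequency of the visual feature.

    `feature_in_roi(image, roi)` is an assumed predicate returning True
    when the visual feature is located inside `roi` for that image.
    """
    # Recurrence frequency: number of images in which the feature
    # falls inside the given ROI.
    freq = {roi: sum(1 for img in images if feature_in_roi(img, roi))
            for roi in rois}
    # "Default ranked set of ROIs": ordered by descending frequency.
    return sorted(rois, key=lambda roi: freq[roi], reverse=True)
```

The sorted list corresponds to the claimed "default ranked set of ROIs"; everything beyond the counting-and-sorting skeleton is an illustrative assumption.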
“…responsive to detecting the visual feature in the third image, transmitting data associated with the visual feature in the third image to a host processor”; are generically recited extra-solution activity of data outputting. The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because of additional elements: “…machine vision system including a computing device for executing an application and a fixed imaging device communicatively coupled to the computing device”; are well-understood, routine, and conventional computer elements that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer and pertain to a well-understood, routine, and conventional machine vision system including a well-understood, routine, and conventional computing device and a well-understood, routine, and conventional fixed imaging device. “…capturing, via the fixed imaging device, a first image over a field of view (FOV)”; and “…capturing, via the fixed imaging device, a second image”; and “…capturing, via the fixed imaging device, a third image”; are well-understood, routine, and conventional extra-solution activity of data gathering. “…responsive to detecting the visual feature in the third image, transmitting data associated with the visual feature in the third image to a host processor”; are well-understood, routine, and conventional extra-solution activity of data outputting.

Dependent claims 2-8 do not remedy these deficiencies. Claim(s) 2 further narrow(s) the scope pertaining to the abstract idea of mentally analyzing a third ROI by adding further details regarding setting the third ROI to be a higher rank ROI, or, if the feature is not within the higher rank ROI, a lower rank ROI and is/are not considered significantly more. 
Claim(s) 3 further narrow(s) the scope pertaining to the abstract idea of mentally analyzing a third ROI by adding further details regarding setting the updated third ROI to the FOV of the third image if the feature is not within the updated third ROI and is/are not considered significantly more.

Claim(s) 4 recites: “…analyzing the third ROI that has been set to the FOV of the third image to detect the visual feature”; Which can reasonably be interpreted as a human observer mentally analyzing the third ROI to detect a visual feature. “…determining…a location of the visual feature within the third image”; Which can reasonably be interpreted as a human observer mentally determining the location of the visual feature in the third image. “…determining…a new third ROI within the third image based on the location of the visual feature”; Which can reasonably be interpreted as a human observer mentally determining a new third ROI based on the location of the feature. “…determining…that the new third ROI is within a predetermined tolerance of the first ROI”; Which can reasonably be interpreted as a human observer mentally determining if the new third ROI is within a tolerance of the first ROI. “…and in response to the determination that the new third ROI is within a predetermined tolerance of the first ROI, incrementing…a weighting factor of the first ROI.”; Which can reasonably be interpreted as a human observer mentally incrementing a weighting factor of the first ROI in the case where the third ROI is within a tolerance of the first.

This judicial exception is not integrated into a practical application because of additional elements: “…the application”; are generically recited computer elements that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer and pertain to a generically recited computing device executing a generically recited application. 
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because of additional elements: “…the application”; are well-understood, routine, and conventional computer elements that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer and pertain to a well-understood, routine, and conventional computing device executing a well-understood, routine, and conventional application. Claim(s) 5 recites: “…incrementing a weighting factor of the first ROI if the visual feature is determined to be within the first ROI”; and “…incrementing a weighting factor of the second ROI if the visual feature is determined to be within the second ROI”; Which can be reasonably interpreted as a human observer mentally incrementing a weighting factor of a first or second ROI based on whether a visual feature is contained within the ROI. “…re-ranking the first and second ROIs based on the weighting factors.”; Which can be reasonably interpreted as a human observer mentally re-ranking first and second ROIs based on weighting factors. This judicial exception is not integrated into a practical application because of additional elements: “…subsequent to the analyzing of the ROI of the third image, iteratively capturing images”; are generically recited extra-solution activity of data gathering. The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because of additional elements: “…subsequent to the analyzing of the ROI of the third image, iteratively capturing images”; are well-understood, routine, and conventional extra-solution activity of data gathering. 
Claim(s) 6 recites: “…the analyzing the at least a portion of the first image to detect the visual feature comprises determining…a bounding box of the visual feature”; Which can be reasonably interpreted as a human observer mentally determining a bounding box for a visual feature. “…the determining the first ROI within the first image comprises applying a scaling factor to the bounding box”; Which can be reasonably interpreted as a human observer mentally increasing/decreasing the size of the bounding box. This judicial exception is not integrated into a practical application because of additional elements: “…the application”; are generically recited computer elements that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer and pertain to a generically recited computing device executing a generically recited application. The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because of additional elements: “…the application”; are well-understood, routine, and conventional computer elements that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer and pertain to a well-understood, routine, and conventional computing device executing a well-understood, routine, and conventional application.

Claim 7 further recites the additional elements of: “responsive to detecting the visual feature in the first image, transmitting data associated with the visual feature in the first image to the host processor” and “responsive to detecting the visual feature in the second image, transmitting data associated with the visual feature in the second image to the host processor”, which are generically recited and well-understood, routine, and conventional extra-solution activities of data outputting. 
Claim(s) 8 further narrow(s) the scope pertaining to the abstract idea of mentally ranking the first and second ROIs by adding further details regarding ranking the ROIs based on a user selection of either ROI and is/are not considered significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Petrou et al. (WO-2013085985-A1), hereinafter referenced as Petrou, in view of Ray et al. (US-20060261167-A1), hereinafter referenced as Ray, and Scott et al. (US-20200242392-A1), hereinafter referenced as Scott.

Regarding claim 1, Petrou discloses: A method for operating a machine vision system (Petrou: Abstract), the machine vision system including a computing device for executing an application and a fixed imaging device communicatively coupled to the computing device, the method comprising (Petrou: Figure 1; 0038: “As shown in FIGURE 3, camera 163 may be disposed on the back side of the device. 
The camera angle may be fixed relative to the orientation of the device.”): (a) capturing, via the fixed imaging device, a first image over a field of view (FOV) (Petrou: Figure 9: Frame 911; 0065: “FIGURE 9 illustrates three frames 911, 921 and 931 taken in sequence.”); (b) analyzing, via the application, at least a portion of the first image to detect a visual feature within the first image (Petrou: Figure 9: Features 913-916; 0065: “the processor has detected and recognized a number of objects in frames 911, 921 and 931. Specifically, the processor detected features 913, 923 and 933…the processor recognized features 914, 924 and 934 as corresponding with text, features 915 and 925 as corresponding with bar codes, and features 916, 926 and 936 as corresponding with a logo.”); (c) determining, via the application, a location of the visual feature within the first image; (d) determining, via the application, a first region of interest (ROI) within the first image based on the location of the visual feature (Petrou: Figure 12: Bounding boxes 1215-1217; 0065: “the processor has detected and recognized a number of objects in frames 911, 921 and 931. Specifically, the processor detected features 913, 923 and 933…the processor recognized features 914, 924 and 934 as corresponding with text, features 915 and 925 as corresponding with bar codes, and features 916, 926 and 936 as corresponding with a logo.”; Wherein detected features are surrounded in bounding boxes); (e) capturing, via the fixed imaging device, a second image (Petrou: Figure 9: Frame 921; 0065: “FIGURE 9 illustrates three frames 911, 921 and 931 taken in sequence.”); (f) analyzing, via the application, at least a portion of the second image to detect the visual feature within the second image (Petrou: Figure 9: Features 923-926; 0065: “the processor has detected and recognized a number of objects in frames 911, 921 and 931. 
Specifically, the processor detected features 913, 923 and 933…the processor recognized features 914, 924 and 934 as corresponding with text, features 915 and 925 as corresponding with bar codes, and features 916, 926 and 936 as corresponding with a logo.”); (g) determining, via the application, a location of the visual feature within the second image; (h) determining, via the application, a second ROI within the second image based on the location of the visual feature (Petrou: Figure 12: Frame 1212; 0065: “the processor has detected and recognized a number of objects in frames 911, 921 and 931. Specifically, the processor detected features 913, 923 and 933…the processor recognized features 914, 924 and 934 as corresponding with text, features 915 and 925 as corresponding with bar codes, and features 916, 926 and 936 as corresponding with a logo.”; Wherein detected features, once detected, are surrounded in bounding boxes); (i) ranking the first and second ROIs (Petrou: Figure 10; 0070: “The system and method may also weigh information obtained from the most recent frames more heavily than information obtained from older frames. For instance, when preparing a query based on the frequency of descriptions across three of the most recent frames, the processor may give an object a relative weight of 1.00 if the object only appears in the most recent frame, a weight of 0.25 if the object only appears in the oldest frame, and a weight of 1.75 (equal to 1.00+0.50+0.25) if the object appears in all three frames. 
The system and method may determine and weigh other signals than those described herein.”; Wherein the ranking of the features contained in the ROIs across the frames constitutes ranking the ROIs of the frames); (j) capturing, via the fixed imaging device, a third image (Petrou: Figure 9: Frame 931; Figure 16; 0082: “if a bottle of Brand OR Bleach appears in ten frames in a row, it may be more efficient to make a single query for the product and track its presence in the frames instead of making ten different queries and ranking an aggregated list of ten different results.”; Wherein images are iteratively captured and processed.); (k) analyzing, via the application, a third ROI within the third image to detect the visual feature, the third ROI within the third image being based on the ranking (Petrou: 0082: “By tracking those objects that are associated with the same item from frame to frame, or within a single frame, the system and method can avoid duplicative searches and apply greater or lesser weights to the information used during a search. For instance, as noted above, the fact that the same item appears in multiple frames may be an indication that the item is of interest to the user.”; Wherein the features/objects and their ROIs are tracked from previous frame ROI position to current frame ROI position as features appearing in multiple frames means that they have a larger weight/rank for the user.); and (l) responsive to detecting the visual feature in the third image, transmitting data associated with the visual feature in the third image to a host processor (Petrou: 0072: “A processor may select a subset of the returned results and display the selected subset to the user. 
This may include selecting the highest ranking result as the optimum annotation…The processor may also select as the optimum annotation the information that appears most applicable to the type of the recognized object, i.e., the address of a building if a building is recognized or a person's name if a person is recognized”; Wherein the higher/highest ranked results are processed and selected by the processor and transmitted to the user); wherein ranking the first and second ROIs includes applying each of the first and second ROIs to multiple images, analyzing the multiple images to determine whether the visual feature is located within each of the first and the second ROIs, and generating a recurrence frequency for each of the first and second ROIs (Petrou: Figure 10; 0070: “The system and method may also weigh information obtained from the most recent frames more heavily than information obtained from older frames. For instance, when preparing a query based on the frequency of descriptions across three of the most recent frames, the processor may give an object a relative weight of 1.00 if the object only appears in the most recent frame, a weight of 0.25 if the object only appears in the oldest frame, and a weight of 1.75 (equal to 1.00+0.50+0.25) if the object appears in all three frames. The system and method may determine and weigh other signals than those described herein.”; Wherein the weighting of the objects including the object frequency within the analyzed frames constitutes the generation of a recurrence frequency for each of the ROIs).

Petrou does not disclose expressly: wherein ranking the first and second ROIs includes storing a default ranked set of ROIs based on the respective recurrence frequencies.

Ray discloses: an automatic data collection device for the identification of a target symbol, wherein the device is able to decode the identified target and rank, sort, or prioritize the detected targets (Ray: Abstract). 
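The Petrou weighting scheme cited above (para. 0070) is simple arithmetic: each of the three most recent frames carries a fixed weight (1.00, 0.50, 0.25, newest first), and an object's score is the sum of the weights of the frames in which it appears. A sketch, with helper names that are assumptions rather than Petrou's:

```python
# Frame-recency weights from Petrou para. 0070, newest frame first.
FRAME_WEIGHTS = [1.00, 0.50, 0.25]

def recurrence_weight(appearances):
    """Sum the weights of the frames in which the object appears.

    `appearances` lists booleans, newest frame first, e.g.
    [True, False, False] = object seen only in the most recent frame.
    """
    return sum(w for w, seen in zip(FRAME_WEIGHTS, appearances) if seen)
```

This reproduces the worked values in the quotation: an object only in the most recent frame scores 1.00, only in the oldest frame 0.25, and in all three frames 1.00 + 0.50 + 0.25 = 1.75.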
The disclosed criteria for the prioritization of the detected targets includes the ranking of the currently identified targets based on a stored history of the previously selected targets (Ray: 0110: “Such criteria or preferences may reflect a preference for particular symbologies, for example ranking representations corresponding to machine-readable symbols of one symbology over those of one or more other symbologies.”; 0113: “The machine-readable symbol reader 12 can be programmed to identify and apply criteria based on previous operation or history. The previous use or history may be that of the machine-readable symbol reader 12 without regard to the user, that of the user without regard to the particular machine-readable symbol reader 12, or a combination. Where based on the historical use of a user, a user specific file will be maintained and transported or accessible between machine-readable symbol readers 12. The user specific file may be identified by a user identifier, and access may be controlled via a password or personal identification number. The processor 42 may, for example, determine a preference for machine-readable symbols of a particular symbology based on a preset or user configurable number of previous decode selections. Thus, for example, if in the previous five most recent operations, the user has consistently selected one symbology over another symbology, the processor can recognize that and employ that as a criteria in ranking.”) Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the weighing of detected objects disclosed by Petrou through the implementation of a user specific file as taught by Ray by weighing the detected objects based on their historical detection. 
The suggestion/motivation for doing so would have been “the user may define a region that should be ignored or the machine-readable symbol reader 12 may determine that a particular region such as the lower left corner of the field-of-view 14 should consistently be ignored based on the history of prior symbol acquisition and/or decoding.” (Ray: 0112). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Petrou in view of Ray does not disclose expressly: wherein each of the first and the second ROIs are less than the entire FOV. Thus, Petrou in view of Ray does not disclose expressly: a fixed camera capturing images over a field of view, wherein the detected objects are smaller than the FOV. Scott discloses: a system for capturing images of a product being purchased for the purposes of detecting whether the product is mislabeled (Scott: Figure 3; Abstract) containing a fixed overhead camera for detecting the product being scanned (Scott: Figure 3; 0052: “When the customer is scanning product 332 using scanner 324 , a camera mounted above scanner 324 captures the image of product 332 as the system is configured to actuate the camera whenever scanner 322 or scanner 324 reads an MRL.”). Wherein for object detection, a region of interest, smaller than the camera’s FOV, defining where the product most likely is, is fed to the object detection model (Scott: 0053: “instead of feeding the whole image into an object detection model, only a region of interest (ROI) is selected from the image and fetched to the object detection model.”; 0054: “ since the camera is stationary in relationship to scanner 322 , in other words, the spatial conditions for constructing the image is known; therefore, this ROI can be predetermined in the image.”). 
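Scott's predetermined-ROI approach quoted above reduces to cropping a fixed region out of each captured frame before detection: because the camera is stationary relative to the scanner, the product's likely location in the image is known in advance. A sketch, assuming a row-major image and an illustrative crop rectangle (the coordinates are not taken from Scott):

```python
# Predetermined ROI as (x0, y0, x1, y1) pixel bounds; values are illustrative.
PRESET_ROI = (2, 1, 5, 4)

def crop_roi(image, roi=PRESET_ROI):
    """Return the sub-image inside `roi`; `image` is a list of pixel rows.

    Only this crop, rather than the whole frame, would be fed to the
    object detection model (Scott, para. 0053).
    """
    x0, y0, x1, y1 = roi
    return [row[x0:x1] for row in image[y0:y1]]
```

Feeding the detector a crop smaller than the FOV is what the rejection maps onto the "ROIs are less than the entire FOV" limitation.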
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to substitute the camera disclosed by Petrou in view of Ray with the fixed stationary camera disclosed by Scott. The suggestion/motivation for doing so would have been “since the camera is stationary in relationship to scanner 322 , in other words, the spatial conditions for constructing the image is known; therefore, this ROI can be predetermined in the image. ” (Scott: 0054). Further, one skilled in the art could have substituted the elements as described above by known methods with no change in their respective functions, and the substitution would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Petrou in view of Ray with Scott to obtain the invention as specified in claim 1. Regarding claim 2, Petrou in view of Ray and Scott discloses: The method of claim 1, wherein the analyzing the third ROI includes: setting the third ROI to be a higher ranked ROI from the first ROI and the second ROI (Petrou: 0081: “When the processor determines that different objects are likely associated with the same item, the processor may associate the objects with identifiers that are intended to track the item from frame to frame.”; 0082: “By tracking those objects that are associated with the same item from frame to frame, or within a single frame, the system and method can avoid duplicative searches and apply greater or lesser weights to the information used during a search. For instance, as noted above, the fact that the same item appears in multiple frames may be an indication that the item is of interest to the user.”; Wherein the ROIs of the features in the third image are initially tracked from their previous position in the previous frame). 
Petrou in view of Ray and Scott does not disclose expressly: if the visual feature is determined not to be within the set third ROI, updating the third ROI to be a lower ranked ROI of the first ROI and the second ROI. Ray further discloses: the usage of the previous positions of the machine-readable symbols detected in a previous image in order to identify an object present in the image (Ray: Figure 9: 602-608; 0094: “At 112, the machine-readable symbol reader 12 acquires an image of the objects in the field-of-view 14 of the machine-readable symbol reader 12. At 114, the machine-readable symbol reader 12 locates potential targets (e.g., representations of machine-readable symbols 16a-16g) in the image. Areas of the image containing potential targets may be referred to as regions of interest (ROI). As described in detail below, the machine-readable symbol reader 12 may locate potential targets using a high level machine vision processing routine to identify representations in the image of objects which have characteristics that correspond to machine-readable symbols.”; 0120-0121: “At 606, the processor 42 checks for the object based on a previous position of the representation of the object (e.g., machine-readable symbol) in the image…If the object is located at 608, the processor 42 determines the characteristics of the located object at 614. The characteristics may include the extent and/or content of the object.”; Wherein multiple representations may be detected). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique of checking the positions of the previous ROIs in the new image as further taught by Ray prior to performing optical flow analysis disclosed by Petrou in view of Ray and Scott. 
The suggestion/motivation for doing so would have been “Method 600 may be useful in adjusting to or accommodating for movement of the reader particularly where the reader is handheld.” (Ray: 0119; Wherein the device/camera may be moved, or the items in the image may be present in the image in a similar manner as previously, thus saving the processing required to detect the item in a new position.). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Petrou in view of Ray and Scott with the further teaching of Ray to obtain the invention as specified in claim 2.

Regarding claim 3, Petrou in view of Ray and Scott discloses: The method of claim 2, wherein the analyzing the third ROI further includes: if the visual feature is determined not to be within the updated third ROI, setting the third ROI to be a FOV of the third image (Petrou: 0089: “By way of example only, a Lucas-Kanade pyramidal optical flow method may be used to track feature correspondence between images. Coarse-to-fine tracking may be performed by iteratively adjusting the alignment of image patches around the points from image to image, starting with the smallest, coarsest pyramid level and ending with the finest pyramid level. The feature correspondences may be stored in a circular buffer for a certain period of time such as a number of seconds. This may allow the processor to replay the flow information in order to align features from an earlier image, which may be annotated, with their position within the latest image.”; Wherein the optical flow methods used to track features in the latest frame constitute setting the ROI to an FOV.).
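The search order recited across claims 2-3 (try the higher-ranked ROI first, fall back to the lower-ranked ROI, and finally widen to the full FOV) can be sketched as follows. This is an illustrative sketch only, not code from the record; the coordinates and the rank labels are hypothetical stand-ins.

```python
# Illustrative sketch (not from the record) of the claims 2-3 fallback order:
# higher-ranked ROI -> lower-ranked ROI -> full FOV of the image.

def contains(roi, point):
    """True if an (x, y) point lies inside roi = (x, y, w, h)."""
    x, y, w, h = roi
    px, py = point
    return x <= px < x + w and y <= py < y + h

def locate_feature(feature_pos, ranked_rois, fov):
    """Return (region_searched, label) following the claimed fallback order."""
    for rank, roi in enumerate(ranked_rois):   # highest-ranked ROI tried first
        if contains(roi, feature_pos):
            return roi, f"roi_rank_{rank}"
    return fov, "full_fov"                     # last resort: the whole frame

FOV = (0, 0, 100, 100)
first_roi, second_roi = (10, 10, 20, 20), (60, 60, 30, 30)

assert locate_feature((15, 15), [first_roi, second_roi], FOV)[1] == "roi_rank_0"
assert locate_feature((70, 70), [first_roi, second_roi], FOV)[1] == "roi_rank_1"
assert locate_feature((5, 95), [first_roi, second_roi], FOV)[1] == "full_fov"
```

The three assertions trace the cascade: a feature inside the top-ranked ROI is found immediately, one inside only the lower-ranked ROI is found on fallback, and one outside both forces a full-FOV search.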
Regarding claim 4, Petrou in view of Ray and Scott discloses: The method of claim 3, wherein the analyzing the third ROI further includes: analyzing the third ROI that has been set to the FOV of the third image to detect the visual feature; determining, via the application, a location of the visual feature within the third image; determining, via the application, a new third ROI within the third image based on the location of the visual feature (Petrou: Figure 16: t0 & t1; 0089: “a Lucas-Kanade pyramidal optical flow method may be used to track feature correspondence between images. Coarse-to-fine tracking may be performed by iteratively adjusting the alignment of image patches around the points from image to image, starting with the smallest, coarsest pyramid level and ending with the finest pyramid level…This may allow the processor to replay the flow information in order to align features from an earlier image, which may be annotated, with their position within the latest image…The resulting point is where the original point would be located if it followed the overall transformation between frames…the processor may analyze some or all of the points around an area of interest, weigh them by distance to the center of the area, remove outliers and compute a weighted translation and scale based on the remaining points.”; Wherein the optical flow algorithm is used to track an item/feature from its position in a previous frame to its position in the current frame.); determining, via the application, that the new third ROI is within a predetermined tolerance of the first ROI; and in response to the determination that the new third ROI is within a predetermined tolerance of the first ROI, incrementing, via the application, a weighting factor of the first ROI (Petrou: 0080: “if the processor detects a bar code in three different frames and the bounding boxes for the bar codes substantially overlap, the processor may assume that the camera was pointed at the same bar
code even if the first two frames yielded a different bar code value, e.g. "12345789", than the third frame, e.g., "12345780". The processor may thus search only for the most popular bar code value, e.g., "12345789", because more images yielded that value in that location than the others. Alternatively, the processor may submit both of the values to the search engine but request that the search engine place more weight on the most popular value.”; 0082: “By tracking those objects that are associated with the same item from frame to frame, or within a single frame, the system and method can avoid duplicative searches and apply greater or lesser weights to the information used during a search.”).

Regarding claim 5, Petrou in view of Ray and Scott discloses: The method of claim 1, further comprising: subsequent to the analyzing of the ROI of the third image, iteratively capturing images (Petrou: 0006: “the device may simultaneously display two or more of the following: (a) the image sent to the server, (b) an image visually similar to the image sent to the server, such as a subsequent frame of a video stream”), and, at each iteration: incrementing a weighting factor of the first ROI if the visual feature is determined to be within the first ROI; incrementing a weighting factor of the second ROI if the visual feature is determined to be within the second ROI; and re-ranking the first and second ROIs based on the weighting factors (Petrou: 0080: “For instance, if the processor detects a bar code in three different frames and the bounding boxes for the bar codes substantially overlap, the processor may assume that the camera was pointed at the same bar code even if the first two frames yielded a different bar code value, e.g. "12345789", than the third frame, e.g., "12345780". The processor may thus search only for the most popular bar code value, e.g., "12345789", because more images yielded that value in that location than the others.
Alternatively, the processor may submit both of the values to the search engine but request that the search engine place more weight on the most popular value.”; Wherein each feature detected in the ROIs in each frame is weighted and thus ranked based on the frequency of detection of the specific feature in the ROI.).

Regarding claim 6, Petrou in view of Ray and Scott discloses: The method of claim 1, wherein: the analyzing the at least a portion of the first image to detect the visual feature comprises determining, via the application, a bounding box of the visual feature; and the determining the first ROI within the first image comprises applying a scaling factor to the bounding box (Petrou: Figure 14; 0079: “FIGURE 14 illustrates a sequence of frames 1411 and 1421. The processor detects three shapes in the first image, namely, the bottle shape, logo and bar code. The processor further determines a bounding box 1412-14 for each shape.”; Wherein the fitting of the bounding boxes to the size of the features constitutes applying a scaling factor to each bounding box.).

Regarding claim 7, Petrou in view of Ray and Scott discloses: The method of claim 1, further comprising: responsive to detecting the visual feature in the first image, transmitting data associated with the visual feature in the first image to the host processor; and responsive to detecting the visual feature in the second image, transmitting data associated with the visual feature in the second image to the host processor (Petrou: Figures 9 & 10; 0072: “A processor may select a subset of the returned results and display the selected subset to the user. This may include selecting the highest ranking result as the optimum annotation.”; Wherein the higher ranking features detected in the first and second images are processed and selected by the processor.).

Regarding claim 8, Petrou in view of Ray and Scott discloses: The method of claim 1.
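The per-ROI weighting and re-ranking behavior that the action maps onto Petrou for claims 4-5 (increment a weighting factor each time the feature recurs in an ROI, then re-rank the ROIs) can be sketched as follows. This is an illustrative sketch only, not code from the record; all names, coordinates, and detection data are hypothetical.

```python
# Illustrative sketch (not from the record): bump each ROI's weighting factor
# when a detected feature falls inside it, then re-rank ROIs by weight,
# mirroring the claim 5 iteration the action reads onto Petrou 0080/0082.

def contains(roi, point):
    """True if an (x, y) point lies inside roi = (x, y, w, h)."""
    x, y, w, h = roi
    return x <= point[0] < x + w and y <= point[1] < y + h

def rerank(rois, weights, detections):
    """Increment the weight of each ROI containing a detection; sort by weight."""
    for pos in detections:
        for roi in rois:
            if contains(roi, pos):
                weights[roi] += 1
    return sorted(rois, key=lambda r: weights[r], reverse=True)

first_roi, second_roi = (0, 0, 10, 10), (50, 50, 10, 10)
weights = {first_roi: 0, second_roi: 0}

# The feature recurs twice in the second ROI and once in the first,
# so the second ROI rises to the top of the ranking.
ranked = rerank([first_roi, second_roi], weights, [(55, 55), (3, 3), (52, 58)])
assert ranked[0] == second_roi and weights[second_roi] == 2
```

The recurrence frequency of detections inside each region, not any property of a single frame, is what drives the ranking here.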
Petrou in view of Ray and Scott does not disclose expressly: wherein the ranking the first and second ROIs comprises: presenting, to a user, a representation of the first ROI and the second ROI; receiving, from the user, a user selection of either the first ROI or the second ROI; and setting the ranks of the first ROI and the second ROI based on the user selection. Ray further discloses: the ranking of ROIs detected in the image based on user configurable preferences, such as the user being able to select regions/ROI present in the field of view of the image to ignore, thus removing them from detection (Ray: 0109: “the processor 42 may rank, sort, order or prioritize regions of interest based on a predefined criteria or preference, a user defined preference, or criteria or preference determined from a historical pattern of usage of the machine-readable symbol reader 12 and/or user.”; 0112: “the user may define a region that should be ignored or the machine-readable symbol reader 12 may determine that a particular region such as the lower left corner of the field-of-view 14 should consistently be ignored based on the history of prior symbol acquisition and/or decoding.”). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique as further taught by Ray of ranking features and ROIs present in images based on user preferences into the ranking of features disclosed by Petrou in view of Ray and Scott. The suggestion/motivation for doing so would have been to prioritize displaying/processing the information/regions the user recognizes to be important (Ray: 0111). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
Therefore, it would have been obvious to combine Petrou in view of Ray and Scott with the further teaching of Ray to obtain the invention as specified in claim 8.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY J RODRIGUEZ whose telephone number is (703) 756-5821. The examiner can normally be reached Monday-Friday, 10am-7pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANTHONY J RODRIGUEZ/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672

Prosecution Timeline

Sep 30, 2022
Application Filed
Jun 10, 2025
Non-Final Rejection — §101, §103
Nov 17, 2025
Response Filed
Jan 19, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499701
DOCUMENT CLASSIFICATION METHOD AND DOCUMENT CLASSIFICATION DEVICE
Granted Dec 16, 2025 (2y 5m to grant)
Patent 12488563
Hub Image Retrieval Method and Device
Granted Dec 02, 2025 (2y 5m to grant)
Patent 12444019
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND MEDIUM
Granted Oct 14, 2025 (2y 5m to grant)
Based on the 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
17%
Grant Probability
-5%
With Interview (-21.4%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
