Prosecution Insights
Last updated: April 19, 2026
Application No. 18/371,558

Systems and Methods for Validated Training Sample Capture

Non-Final OA (§101, §102, §103)
Filed: Sep 22, 2023
Examiner: YAO, JULIA ZHI-YI
Art Unit: 2666
Tech Center: 2600 (Communications)
Assignee: Zebra Technologies Corporation
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (above average; 47 granted / 69 resolved; +6.1% vs TC avg)
Interview Lift: +35.7% across resolved cases with an interview (strong)
Typical Timeline: 3y 4m average prosecution; 29 applications currently pending
Career History: 98 total applications across all art units
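A minimal sketch of how this card's headline figures could be derived. Only the 47/69 count, the +6.1% delta, and the +35.7% lift come from the card itself; treating the 99% with-interview figure as a display cap on the base rate plus lift is an assumption about how the dashboard combines them.

```python
# Sketch of the card's arithmetic; the 0.99 cap is an assumption, everything
# else is copied from the figures shown above.

granted, resolved = 47, 69
allow_rate = granted / resolved            # 0.681 -> shown as "68%"

tc_delta = 0.061                           # "+6.1% vs TC avg"
tc_avg = allow_rate - tc_delta             # implied Tech Center average ~62%

interview_lift = 0.357                     # "+35.7%" interview lift
with_interview = min(0.99, allow_rate + interview_lift)   # capped -> "99%"

print(f"allow rate {allow_rate:.1%}, TC avg ~{tc_avg:.1%}, "
      f"with interview {with_interview:.0%}")
```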

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 69 resolved cases.
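The per-statute deltas imply a Tech Center baseline. A short sketch recovering it from the figures above; the panel does not define the underlying metric, so the rates are used as given.

```python
# Back out the implied Tech Center average from each "vs TC avg" delta.
# Rates and deltas are copied from the panel above.

stats = {                # statute: (examiner rate, delta vs TC avg)
    "101": (0.089, -0.311),
    "102": (0.112, -0.288),
    "103": (0.526, +0.126),
    "112": (0.261, -0.139),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta        # all four imply a ~40% TC baseline
    print(f"§{statute}: examiner {rate:.1%}, TC avg ~{tc_avg:.1%}")
```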

Office Action

Rejections under §101, §102, and §103.
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-16 are pending for examination in Application No. 18/371,558, filed September 22, 2023.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The examiner respectfully suggests the following title as written in the Derwent database for Family Patent Application No. WO 2025/064205 A1: "Systems and Methods for Validated Training Sample Capture in Retail Facility for Classification Model Training".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, such as an observation, evaluation, judgment, or opinion). The claims recite methods and a device configured to generate a training sample for a classification model. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations that would tie them to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be done mentally, and no additional features in the claims would preclude them from being performed as such except for the generic computer elements recited at a high level of generality (i.e., processor, memory).

According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Using the two-step inquiry, claims 1, 8, and 15 are directed to an abstract idea, as shown below.

STEP 1: Do the claims fall within one of the statutory categories? YES; claims 1, 8, and 15 are directed to methods (i.e., processes) and a device (i.e., an apparatus).

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? YES; the claims are directed toward a mental process (i.e., an abstract idea). With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas: mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations); certain methods of organizing human activity (fundamental economic principles or practices, including hedging, insurance, mitigating risk; commercial or legal interactions, including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations; managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions); and mental processes (concepts that are practicably performed in the human mind, including an observation, evaluation, judgment, opinion). The method in claim 1 (and the device and method in claims 8 and 15, respectively) comprise a mental process that can be practicably performed in the human mind (or by generic computers or components configured to perform the method) and are, therefore, an abstract idea.

Regarding claims 1, 8, and 15: the method recites the steps (functions) of: generating, from the image, a region of interest bounding the item (a mental process including observation and evaluation that can be done in the human mind); obtaining, from the image, candidate label data corresponding to the item (a mental process including observation and evaluation that can be done in the human mind); receiving a validation input associated with the candidate label data (a mental process including observation and evaluation that can be done in the human mind; e.g., the "validation input" can be a confirmation, acceptance, edit, etc.); and in response to the validation input, generating a training sample for a classification model, the training sample including (i) the region of interest and (ii) label data corresponding to the item (a mental process including observation and evaluation that can be done in the human mind). These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas – the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO; the claims do not recite additional elements that integrate the judicial exception into a practical application. With regard to STEP 2A (PRONG 2), the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application: an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field; an additional element applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim; an additional element effects a transformation or reduction of a particular article to a different state or thing; and an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application: an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; an additional element adds insignificant extra-solution activity to the judicial exception; and an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Claims 1, 8, and 15 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. Claims 1, 8, and 15 recite the further limitations of: capturing an image of an item (insignificant pre-/post-solution extra activity of generating data, where the data is the image); a sensor (a generic computer or component configured to perform the method); a processor (a generic computer or component configured to perform the method); and a computing device (a generic computer or component configured to perform the method). These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere pre-/post-solution actions, which are a form of insignificant extra-solution activity. Further, these elements are claimed generically and operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO; the claims do not recite additional elements that amount to significantly more than the judicial exception. With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements: adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present. Claims 1, 8, and 15 do not recite any additional elements that are not well-understood, routine or conventional. The use of a computer to capture, generate, obtain, receive, etc., as claimed in claims 1, 8, and 15, is a routine, well-understood and conventional process performed by computers. Thus, since claims 1, 8, and 15 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claims 1, 8, and 15 are not eligible subject matter under 35 U.S.C. § 101.

Regarding claims 2-7, 9-14, and 16: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitations recite either: (1) a mental process including observation and evaluation that can be done in the human mind (i.e., claims 2 (and similar claim 9), 3 (and similar claim 10), 6 (and similar claim 13), and 7 (and similar claim 14) recite the mental processes of: detecting and decoding a barcode or detecting text; determining the position of the barcode; determining item recognition data including a confidence level and displaying the region of interest with visual attributes based on a confidence level threshold; and determining a plurality of sets of item recognition data with respective confidence levels, wherein the validation input includes a selection of one of the plurality of sets of item recognition data, respectively); (2) insignificant pre-/post-solution extra activity of generating data (i.e., claims 4 (and similar claim 11), 5 (and similar claim 12), and 16 recite the activities of generating/gathering: receiving text or controlling a sensor; label data including a portion of candidate label data or updated label data; and receiving input data, respectively); and/or (3) generic computers or components configured to perform the method (i.e., claims 6 (and similar claim 13) and 7 (and similar claim 14), and claim 16 recite the generic computers or components of: executing a classification model and a computing device, respectively).
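For readers tracking what the §101 analysis above is dissecting, here is a minimal sketch of the method recited in independent claim 1. It is an illustration only: every type, function, and value below is a hypothetical placeholder, not code from the application.

```python
# Hypothetical sketch of the four steps of claim 1 as recited in the
# rejection above: (1) generate a region of interest, (2) obtain candidate
# label data, (3) receive a validation input, (4) emit a training sample.

from dataclasses import dataclass

@dataclass
class TrainingSample:
    region_of_interest: tuple   # (x, y, w, h) bounding the item in the image
    label: str                  # validated label data for the item

def build_training_sample(image, detect_item, read_label, ask_user):
    roi = detect_item(image)                 # (1) region of interest
    candidate = read_label(image, roi)       # (2) candidate label (barcode/OCR)
    validated = ask_user(candidate)          # (3) confirm, edit, or reject
    if validated is None:                    # user rejected the candidate
        return None
    return TrainingSample(region_of_interest=roi, label=validated)  # (4)

# Toy usage with stand-in callables; all values are invented for illustration.
sample = build_training_sample(
    image=None,
    detect_item=lambda img: (10, 20, 64, 64),
    read_label=lambda img, roi: "0012345 example item",
    ask_user=lambda cand: cand,              # user confirms the candidate
)
print(sample)
```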
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5 and 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over Chaubard (US 2020/0005225 A1) in view of Patil et al. (Patil-2024; US 2024/0212371 A1).

Regarding claim 1, Chaubard discloses a method, comprising: capturing an image of an item (paras. [0030-0031], which recite: [0030] "…one or more cameras of the out-of-stock detection system capture a template image of a fully stocked product display area for each product display area in a store. The system identifies product labels in the template image and generates a bounding box for each product label based on a location of each product label within the product display area. From each product label, the system identifies product information from each product label." [0031] "Accordingly, using the identified product information, the system identifies a product associated with each product label. In one embodiment, the system extracts product information from each product label, which includes information describing the product associated with the product label, as described above. With the product identified for each bounding box, the system associates the identified product with the corresponding bounding box. The system then generates individual product bounding boxes for each individual product within a single bounding box for each bounding box and identifies a number of individual products for the product display location based on the number individual product bounding boxes within each bounding box. …", where the "template image" is an image of at least an item (e.g., an "individual product")); generating, from the image, a region of interest bounding the item (para. [0031], cited above, where "generat[ing] individual product bounding boxes for each individual product" is generating a region of interest bounding each item); obtaining, from the image, candidate label data corresponding to the item (paras. [0030-0031], cited above, where the "product label" is candidate label data corresponding to the item); receiving an input associated with the candidate label data (paras. [0041] and [0057], which recite: [0041] "Moreover, the out-of-stock detection system 160 provides a tool for store associates to update the planogram when a shelf is changed in the store and relay that information directly to the backend of the system. After an associate changes a shelf, the associate may use store client device 150 showing the previous picture of the shelf and the previous products. The system, from the store client device (e.g., a tablet computer), enables the store associate to cause a picture to be captured from the relevant shelf camera 100 and manually mark the image as the new template image for use as a template by the system." [0057] "The user interface module 180 can provide a user interface to the store client device 150 for capturing template images of store shelves that hold products for sale by the store. Additionally, the user interface module 180 may provide a user interface to the store client device 150 for labeling products in template images. The user interface module 180 receives images from the camera 100 and the store client device 150 and stores the images in the data store 185.", where a user updating the "labeling [of] products in template images", such as when "a shelf is changed in the store", is receiving an input associated with the candidate label); and in response to the input, generating a training sample for a classification model (para. [0041], cited above, where paras. [0048-0049] and [0058] further recite: [0048] "…For example, store shelves may display a product label that contains a barcode, or written numerical values that represent the stock keeping unit, for each product on the shelves, and the product detection module 150 may identify the product above each product label as the product associated with the barcode or stock keeping unit. …" [0049] "…The product-detection model can be trained based on template images that have been labeled by the store associate or other human labeler. …" [0058] "The data store 185 stores data used by the out-of-stock detection system 160. For example, the data store 185 can store images from the camera 100 and the store client device 150. The data store 185 can also store location information associated with template images, and can store products identified in images by the product detection module 150. The data store 185 can also store product information, a store map or planogram, shopper information, or shopper location information. …", where the "template image" is a training sample for training a classification model (e.g., a "product detection model" which identifies products) including (i) the region of interest (e.g., individual products) and (ii) label data corresponding to the item (e.g., "store products identified in images")).

Chaubard does not specifically disclose receiving a validation input associated with the candidate label data, and, in response to the validation input, generating a training sample for a classification model. Patil-2024, in the same field of endeavor of product-detection models in retail images, teaches receiving a validation input associated with the candidate label data (paras. [0105] and [0109], which recite: [0105] "In some examples, annotated images 420c which are determined to have unacceptable annotations are prioritized (for example, assigned a priority) for further annotation. Prioritization application 418 may prioritize images that lack complete labels. …In a particular example, where 420c is an annotated real-time image 328, and has been assigned an annotation of a specific product but at a confidence level below a predefined confidence level threshold, the prioritization application 418 may assign annotated image 420c a high priority for additional annotation." [0109] "In some examples, annotated image 420d may receive additional annotations generated by annotating user 434. Annotating user 434 may utilize annotation tool 444 to view and annotate/label/tag annotated image 420d. Annotating user 434 may add one or more additional annotations, modify one or more existing annotations, verify one or more existing annotations, or remove one or more existing annotations. In some examples, annotating user 434 may provide product curations/identifications for retail products sold by the enterprise.", where a user's modification of "one or more existing annotations" in a pre-annotated image is receiving a validation input associated with a candidate label (e.g., "product curations/identifications for retail products")); and in response to the validation input, generating a training sample for a classification model (paras. [0081], [0110], [0112-0113], and [0117], which recite: [0081] "In some examples, object detection application 312 may detect objects within stored image 320b, using a model selected based at least on the context assigned to stored image 320b. In some examples, specific products may be detected. In some examples, bounding boxes may be placed on the image. In some examples, specific product crops may be generated." [0110] "…the re-annotated image 420e which now includes an assigned context and acceptable annotations may be stored in central database 330." [0112] "…providing annotated images for re-training context and annotation models, according to an example." [0113] "In some examples, stored image 520, which may be stored in central database 330, is available to consuming user 540. In some examples, stored image 520 may represent annotated image 420b (which acquired acceptable annotations via annotation models 106, 306) or re-annotated image 420e (which acquired acceptable annotations via annotation models 106, 306 and annotation tool 444). Consuming user 540 may utilize the stored image 520 for a multitude of purposes, including: in advertising and marketing campaigns; to attract new customers and retain or increase the patronage of new customers; to analyze product shelf placement; train or build machine learning models; detect safety hazards in brick-and-mortar retail stores warehouses, mixing centers, parking lots, or other physical areas; to identify store or retail display stock levels; to perform other necessary enterprise tasks. In one example, consuming user 540 may utilize one or more downstream applications to analyze the annotated image, and detect conditions associated with the image. This is particularly the case for real-time images, for example images of shelf stock. In examples, the consuming user 540 may utilize a downstream application used to detect out of stock events …" [0117] "In some examples, when the accuracy/detection of a particular type of object or particular product is lower than desired. For example, "lamps" within a particular context (e.g., "furniture") may not being detected at a rate above a particular accuracy threshold, for example based on a manual audit of images, based on, e.g., an assessment of image recognition confidence, or based on feedback at the time of image capture. In such cases, one or more models of annotation models 106 may be re-trained using a subset of stored images 520 that include accurate annotations of "lamp" within that context.", where retraining a machine learning model using the "re-annotated image" is generating training samples for a classification model (e.g., an "object detection" model)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Chaubard to incorporate receiving a validation input associated with a candidate label and, in response to the validation input, generating a training sample for the product detection classification model, the training sample including the region of interest and label data corresponding to the item, to improve the accuracy and/or detection of the annotations detected by the product detection classification model of Chaubard when the annotation of a specific product is below a confidence level, as taught by Patil-2024 above (see para. [0105] above).

Regarding claim 2, Chaubard in view of Patil-2024 discloses the method of claim 1, wherein Chaubard further discloses obtaining the candidate label data includes at least one of: detecting a barcode in the image and decoding the barcode, or detecting text in the region of interest (paras. [0018-0019], which recite: [0018] "Accordingly, there are multiple ways to identify the product 110 associated with each bounding box 120. In one embodiment, the out-of-stock detection system extracts product information from each product label 115, which includes information describing the product 110 associated with the product label 115. This information includes the product name and brand, price, barcode, and stock keeping unit (SKU) number, among potentially other types of information, and the out-of-stock detection system extracts this information to identify the product 110 associated with each product label 115 and, therefore, bounding box 120." [0019] "In one embodiment, an optical character recognition (OCR) algorithm is used to identify text of the product labels 115. Additionally, a barcode detection algorithm could be used to detect barcodes within the product label 115 and identify the products 110 based on the barcodes. While each camera 100 is capable of operating at a low image resolution to conserve energy, the cameras 100 may capture relatively high-resolution images during the onboarding process, and may optionally zoom in on each product label 115 individually in order to be able to clearly identify this information within each product label 115.", where "barcode detection" is detecting and decoding a barcode in the image and "identify[ing] text of the product labels" is detecting text in the region of interest), and performing optical character recognition on the detected text (para. [0019], cited above).

Regarding claim 3, Chaubard in view of Patil-2024 discloses the method of claim 2, wherein Chaubard further discloses detecting the barcode includes determining that a position of the barcode in the image relative to the region of interest satisfies an association criterion (para. [0052], which recites: "In some embodiments, the out-of-stock detection module 175 uses product labels on shelves to determine if an item is out of stock. The out-of-stock detection module 175 can detect product labels bounding boxes in the images received from the cameras 100 and can extract information from each product label like the barcode, the stock keeping unit, the price, the sale price if it is on sale, the name of the product, and any other visual information that is on the product label. For example, the out-of-stock detection module 175 may extract the name of a product, stock keeping unit, or a barcode from the product label and the out-of-stock detection module 175 may determine that the products identified by the product labels should be detected in the image received from the camera 100 near that product label. If the out-of-stock detection module 175 does not detect the items identified by the product label near (e.g., immediately above or below) that tag, the out-of-stock detection module 175 determines that the items are out of stock.", where determining that the "products identified by the product labels", such as by a "barcode", are "near that product label" is determining that a position of the barcode in the image relative to the region of interest satisfies an association criterion (e.g., "near")).

Regarding claim 4, Chaubard in view of Patil-2024 discloses the method of claim 1, wherein Chaubard further discloses receiving the validation input includes at least one of: receiving text defining a description of the item (paras. [0018-0019], cited in claim 2 above, where "text of the product labels" is text defining a description of the item), or controlling a sensor to scan a barcode associated with the item (para. [0048], which recites: "The image collection module 165 collects images from the cameras 100 and, in some embodiments, from the store client device 150. The image collection module 165 stores collected images and image labeling data in the data store 190. The product detection module 150 detects products in images captured by the cameras 100. In some embodiments, the product detection module 150 identifies products in the received images automatically. For example, the product detection module 150 may apply an optical character recognition (OCR) algorithm to the received images to identify text in the images, and may determine which products are captured in the image based on the text (e.g., based on whether the text names a product or a brand associated with the product). The product detection module 150 also may use a barcode detection algorithm to detect barcodes within the images and identify the products based on the barcodes. For example, store shelves may display a product label that contains a barcode, or written numerical values that represent the stock keeping unit, for each product on the shelves, and the product detection module 150 may identify the product above each product label as the product associated with the barcode or stock keeping unit. …", where the "cameras" are sensors controlled to collect images, including scanning (i.e., detecting) "barcodes" associated with products in the image).

Regarding claim 5, Chaubard in view of Patil-2024 discloses the method of claim 1, wherein Chaubard further discloses the label data corresponding to the item includes at least one of: at least a portion of the candidate label data, or updated label data defined by the validation input (paras. [0030-0031], cited in the claim 1 limitation "capturing an image of an item" above, where Chaubard at least discloses that the label data corresponding to the item includes at least a portion of the candidate label data as the "product label").
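The claim 3 mapping above turns on an "association criterion" relating a barcode's position to the region of interest (Chaubard's "near (e.g., immediately above or below)" language). Below is a minimal sketch of one such criterion; the specific proximity rule (barcode centre horizontally within the ROI and within one ROI-height above or below it) is an assumption for illustration, not the application's actual test.

```python
# Hypothetical association criterion: attribute a decoded barcode to an item
# only if the barcode's position relative to the item's region of interest
# (ROI) satisfies a proximity rule. Boxes are (x, y, w, h) in pixels.

def barcode_matches_roi(barcode_box, roi, tolerance=1.0):
    bx = barcode_box[0] + barcode_box[2] / 2      # barcode centre x
    by = barcode_box[1] + barcode_box[3] / 2      # barcode centre y
    x, y, w, h = roi
    horizontally_aligned = x <= bx <= x + w       # centre inside ROI's x-span
    # distance from the barcode centre to the ROI's top or bottom edge
    vertical_gap = min(abs(by - y), abs(by - (y + h)))
    return horizontally_aligned and vertical_gap <= tolerance * h

# Barcode just below a shelf item's ROI -> associated (prints True).
print(barcode_matches_roi((40, 210, 20, 10), (30, 100, 60, 100)))
```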
Regarding claim 8, the claim differs from claim 1 in that it is in the form of a computing device, comprising: a sensor; and a processor configured to perform the method of claim 1. Chaubard discloses said sensor and processor (para. [0048], cited in claim 4 above, where the "cameras" are sensors; and para. [0062], which recites: "…Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability."). Therefore, claim 8 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).

Regarding claim 9, the claim recites similar limitations to claim 2 and is rejected for similar rationale and reasoning (see the analysis for claim 2 above). Regarding claim 10, the claim recites similar limitations to claim 3 and is rejected for similar rationale and reasoning (see the analysis for claim 3 above). Regarding claim 11, the claim recites similar limitations to claim 4 and is rejected for similar rationale and reasoning (see the analysis for claim 4 above). Regarding claim 12, the claim recites similar limitations to claim 5 and is rejected for similar rationale and reasoning (see the analysis for claim 5 above).

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Chaubard in view of Patil-2024 as applied to claims 1 and 8 above, and further in view of Geisner et al. (Geisner; US 2013/0085345 A1).

Regarding claim 6, Chaubard in view of Patil-2024 discloses the method of claim 1, and Patil-2024 further teaches the method further comprising: in response to detecting the region of interest, executing the classification model to determine item recognition data corresponding to the item, the item recognition data including a confidence level (para. [0105], cited in the claim 1 limitation "receiving a validation input" above, where annotations of a specific product (i.e., classifications of a specific product) include a "confidence level"); and displaying the region of interest with a first visual attribute (para. [0031], cited in the claim 1 limitation "capturing an image of an item" above, where a "bounding box" is at least a first visual attribute). Chaubard in view of Patil-2024 does not specifically disclose displaying the region of interest with a first visual attribute if the confidence level satisfies a threshold, or a second visual attribute if the confidence level does not satisfy the threshold. Geisner, in the same field of endeavor of product detection, teaches displaying the region of interest with a first visual attribute if the confidence level satisfies a threshold, or a second visual attribute if the confidence level does not satisfy the threshold (para. [0153], which recites: "…The image data for a given food product can be an image of packaging or the food from one or more perspectives, e.g., from the front or side. Some pattern recognition algorithms are probabilistic in nature, in that they use statistical inference to find the best match for a food product. In this case, the algorithm provides a value of a degree of confidence associated with a choice/potential match. If the degree of confidence is below a threshold level, the user can be instructed to further investigate the food item. See FIG. 12D. The degree of confidence can be expressed by a color code of an augmented reality image, e.g., green for high confidence, yellow for medium confidence and red for low confidence.", where "green" is a first visual attribute if a confidence level satisfies a threshold of "high confidence"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Chaubard in view of Patil-2024 to incorporate displaying the region of interest with at least a first attribute if the confidence level satisfies a threshold, to highlight to the user any detected products below a certain confidence threshold for further investigation, as taught by Geisner above.

Regarding claim 13, the claim recites similar limitations to claim 6 and is rejected for similar rationale and reasoning (see the analysis for claim 6 above).

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Chaubard, as modified by Patil-2024 and Geisner, as applied to claims 6 and 13 above, and further in view of Patil et al. (Patil-2023; US 2023/0274226 A1).

Regarding claim 7, Chaubard, as modified by Patil-2024 and Geisner, discloses the method of claim 6, and Patil-2023, in the same field of endeavor of product detection, teaches the method further comprising: via execution of the classification model, determining a plurality of sets of item recognition data with respective confidence levels (paras. [0138-0139], which recite: [0138] "In example operation 806, the comparison of the product fingerprint to product fingerprints in the database may result in a match percentage, wherein the match percentage is a percentage of the predicted accuracy associated with the match. The product recognition sub-module 208 may identify the store product that is the closest match to the detected product that needs to be identified. For example, the product recognition sub-module 208 may identify one or more store products that have fingerprints that closely match the fingerprint of the product from the processed image that is the subject of the product recognition analysis. The comparison of the product fingerprint to the plurality of stored product fingerprints may yield matches with varying confidence levels." [0139] "For example, in case of a 10 oz box Multi-grain Cheerios cereal, the product recognition sub-module 208 may compare a generated fingerprint of the image of the 10 oz box of Multi-grain Cheerios to the database of product fingerprints. Upon completing the comparison process, the product recognition sub-module 208 may determine that several close matches exist, including a 10 oz box of Multi-grain Cheerios, an 18 oz box of Multi-grain Cheerios, a 10 oz box of Honey Nut Cheerios. The product recognition sub-module 208 may return the match results with confidence levels associated with each level. For example, product recognition sub-module 208 may determine that the match associated with the 10 oz box of Multi-grain Cheerios resulted in 92% match, whereas the match associated with the 18 oz box of Multi-grain Cheerios resulted in a 70% match and a match associated with the 10 oz box of Honey Nut Cheerios resulted in a 48% match.", where the "confidence levels" associated with each classification "match result" are a plurality of sets of item recognition data with respective confidence levels); wherein the validation input includes a selection of one of the sets of item recognition data (paras. [0140] and [0142], which recite: [0140] "In example operation 808, the product recognition sub-module 208 may receive the one or more potential matches determined in operation 806 and analyze the matches further with additional data in order to arrive a final product identification. For example, the refinement process may utilize additional reference sources including product packaging elements such as the text on the product packaging and color of the product packaging, category of the product, the location of the product, including shelf location, aisle location and geo-location in order to improve the overall accuracy and confidence level associated with the product identification." [0142] "The product recognition sub-module 208 may use the additional information in order to analyze the one or more matches from operation 806 to reevaluate the confidence levels associated with the matches and arrive at a final match that is used to identify the product in the detected image from the shelf image.", where the "final match" or "final product identification" is selecting one of the sets of item recognition data as the validation input). It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Chaubard, as modified by Patil-2024 and Geisner, to incorporate selecting one of the sets of item recognition data as the validation input, to improve the overall accuracy of product identification, as taught by Patil-2023 above (see para. [0140] above).

Regarding claim 14, the claim recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above).
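Claims 6 and 7 turn on confidence-driven display attributes and on selecting among several candidate matches. The sketch below illustrates both ideas together, borrowing Geisner's colour-coding scheme and the confidence figures from the Patil-2023 Cheerios example quoted above; the 0.8 threshold and all function names are assumptions for illustration only.

```python
# Hypothetical sketch: (claim 6) colour-code a region of interest by whether
# its classifier confidence satisfies a threshold; (claim 7) present several
# candidate matches so a validation input can select one of them.

def roi_colour(confidence, threshold=0.8):
    # first visual attribute if the threshold is satisfied, else a second one
    return "green" if confidence >= threshold else "red"

candidates = [                      # item recognition data with confidences,
    ("Multi-grain Cheerios 10 oz", 0.92),   # taken from Patil-2023's example
    ("Multi-grain Cheerios 18 oz", 0.70),
    ("Honey Nut Cheerios 10 oz", 0.48),
]

for label, conf in candidates:
    print(f"{roi_colour(conf):>5}  {conf:.0%}  {label}")

# The validation input selects one set of item recognition data:
selected = candidates[0]
```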
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 15-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chaubard (US 2020/0005225 A1).

Regarding claim 15, Chaubard discloses a method, comprising: capturing, at a computing device, an image of an item (paras. [0030-0031]; see the similar limitation "capturing an image of an item" in claim 1 above, where para. [0062], cited in claim 8 above, further recites the computing device as one or more processors); determining a boundary containing the item in the image (para. [0031]; see the similar limitation "generating, from the image, a region of interest bounding the item" in claim 1 above, where a bounding box is a boundary containing the item); obtaining, prior to capturing a further image, label data corresponding to the item (paras. [0030-0031]; see the similar limitation "obtaining, from the image, candidate label data corresponding to the item" in claim 1 above, where the "product label" is label data); and generating a training sample for a classification model, the training sample including (i) the boundary and (ii) label data corresponding to the item (paras. [0041], [0048-0049], and [0058]; see the citations for the similar limitation "in response to the validation input, generating a training sample for a classification model" in claim 1 above, where the "region of interest" in claim 1 is the boundary).

Regarding claim 16, Chaubard discloses the method of claim 15, wherein obtaining the label data includes receiving input data at the computing device, the input data defining an identifier affixed to the item (paras. [0018-0019]; see the citations in claim 2 above, where a barcode is an identifier affixed to an item).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z. YAO, whose telephone number is (571) 272-2870. The examiner can normally be reached Monday - Friday, 8:30 AM - 5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.Z.Y./ Examiner, Art Unit 2666
/EMILY C TERRELL/ Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Sep 22, 2023
Application Filed
Sep 17, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597169
ACTIVITY PREDICTION USING PORTABLE MULTISPECTRAL LASER SPECKLE IMAGER
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586219
Fast Kinematic Construct Method for Characterizing Anthropogenic Space Objects
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579638
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM FOR PERFORMING DETERMINATION REGARDING DIAGNOSIS OF LESION ON BASIS OF SYNTHESIZED TWO-DIMENSIONAL IMAGE AND PRIORITY TARGET REGION
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12562063
METHOD FOR DETECTING ROAD USERS
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12561805
METHODS AND SYSTEMS FOR GENERATING DUAL-ENERGY IMAGES FROM A SINGLE-ENERGY IMAGING SYSTEM BASED ON ANATOMICAL SEGMENTATION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview: 99% (+35.7%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 69 resolved cases by this examiner. Grant probability derived from career allow rate.
