Prosecution Insights
Last updated: April 19, 2026
Application No. 18/072,173

Devices, Systems, and Methods for Automated Weight Measuring and Inventory Management of Consumer Goods

Status: Non-Final OA (§102)
Filed: Nov 30, 2022
Examiner: WERONSKI, MATTHEW S
Art Unit: 3627
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Zebra Technologies Corporation
OA Round: 3 (Non-Final)

Grant Probability: 10% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
Grant Probability with Interview: 29%

Examiner Intelligence

Career Allow Rate: 10% (11 granted / 115 resolved; -42.4% vs TC avg) — grants only 10% of cases
Interview Lift: +19.8% in resolved cases with an interview (a strong ~+20% lift)
Avg Prosecution: 4y 0m (typical timeline)
Total Applications: 147 across all art units; 32 currently pending

Statute-Specific Performance

§101: 31.5% (-8.5% vs TC avg)
§103: 37.7% (-2.3% vs TC avg)
§102: 22.3% (-17.7% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 115 resolved cases
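The dashboard percentages above are simple ratios of the counts shown. As illustrative arithmetic only (the TC average for §102 is back-derived from the stated -17.7% gap, not an independently known figure):

```python
def allow_rate_pct(granted: int, resolved: int) -> float:
    # Career allowance rate as a percentage of resolved cases.
    return 100.0 * granted / resolved

def gap_vs_tc(rate_pct: float, tc_avg_pct: float) -> float:
    # Signed gap, in percentage points, between this examiner and the TC average.
    return rate_pct - tc_avg_pct

career = allow_rate_pct(11, 115)    # ~9.6%, shown rounded as 10% above
gap_102 = gap_vs_tc(22.3, 40.0)     # -17.7 points, assuming a 40.0% TC avg for §102
```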

Office Action

§102
DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/15/2025 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

The instant application does not claim priority to any other application. For the purpose of examination, the effective filing date of the instant application regarding prior art is November 30, 2022.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5-16 and 18-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Buibas et al. (US 2020/0020112 A1).
Regarding Claim 1, Buibas teaches: A method comprising: training a machine learning model based on a set of training images (See Buibas ¶ [0139] – machine learning techniques used to train classification algorithms/ neural networks based on image data); detecting, by a scale, a first change in weight of a display area (As the specification defines a scale device as an electronic scale detecting changes in weight, see Buibas ¶ [0182] – using weight sensors [electronic scales] to detect item movement from a display shelf); capturing, by a first camera, a first image of the display area (See Buibas ¶ [0134-0135] – using image data to determine a person in a store has interacted with an item in an item storage area); detecting, by one or more processors, a first object in the first image (See Buibas ¶ [0134-0135] – using image data to determine a person in a store has interacted with an item in an item storage area), the first object having a value based on weight (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example); segmenting, by the one or more processors, a portion of the first image corresponding to the detected first object (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items); providing the segmented portion of the first image to the trained machine learning model (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items); utilizing the trained machine learning model to identify the first object (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items);
associating, by the one or more processors, the detected first change in weight with the identified first object (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example); determining, by the first camera, a transaction value associated with the identified first object based on the detected first change in weight and the first object value (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example, [0150-0152] – cameras' detection of user interaction with items, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item and Fig. 13 – showing a rendition of a monitored display area and displaying prices [transaction value] of items selected by users); and displaying, by a display, a rendition of the display area, the rendition containing a representation of the first object and the transaction value associated with the identified first object based on the detected first change in weight (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item and Fig. 13 – showing a rendition of a monitored display area and displaying prices [transaction value] of items selected by users).
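The claim-1 mapping above walks through an event-driven pipeline: a scale event triggers image capture, object detection, segmentation, model-based identification, and a transaction value derived from the weight change. A minimal hypothetical sketch of that flow follows; every name, the callback wiring, and the per-gram pricing rule are invented for illustration and are not taken from the application or from Buibas:

```python
def handle_weight_event(weight_delta_g, image, detect, segment, classify, price_per_gram):
    # Hypothetical sketch of the claimed flow: associate a detected weight
    # change with an identified object and derive a transaction value.
    obj_region = detect(image)                           # detect the first object in the image
    crop = segment(image, obj_region)                    # segment the portion for that object
    label = classify(crop)                               # identify via the trained ML model
    value = abs(weight_delta_g) * price_per_gram[label]  # weight change x object value
    return label, round(value, 2)
```

For example, with stub callbacks and an invented $0.004/g price, a 355 g removal classified as "soda_can" yields a $1.42 transaction value.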
Regarding Claim 2, Buibas teaches: The method of claim 1, wherein the transaction value associated with the identified first object is based on at least one of a price of the identified first object, or a first object identifier (It is noted that these limitations do not further functionally limit the method of claim 1 and merely describe the transaction value associated with the identified first object; therefore, these limitations are considered non-functional descriptive material and are not given patentable weight. Nonetheless, see Buibas Fig. 13 – showing the recognized transaction value by item price and description). Regarding Claim 3, Buibas teaches: The method of claim 1, wherein identifying the first object comprises: identifying, by the one or more processors, one or more features of the detected first object in the image (See Buibas ¶ [0278-0279] – using features of items in a 3D space to identify said items); and identifying, by the one or more processors, the first object based on the one or more features (See Buibas ¶ [0278-0279] – using features of items in a 3D space to identify said items). Regarding Claim 5, Buibas teaches: The method of claim 1, wherein the trained machine learning model is a deep learning neural network (As the specification of the instant application describes a deep learning neural network as comprising a convolutional neural network, a recurrent neural network and a fully connected neural network, see Buibas ¶ [0135] – neural network employing any number of fully connected layers, convolutional layers, recurrent layers, or any other type of neurons or connections).
Regarding Claim 6, Buibas teaches: The method of claim 1, wherein displaying the rendition of the display area comprises: displaying a real-time captured image of the detected first object and displaying the transaction value associated with the identified first object with the real-time captured image of the detected first object (See Buibas ¶ [0215-0216] – real-time tracking of users as they interact with items around them and Fig. 13 – showing users interacting with items and displaying a price [transaction value] of each item). Regarding Claim 7, Buibas teaches: The method of claim 6, further comprising: capturing the real-time captured image of the detected first object from the first camera (See Buibas ¶ [0135] – using cameras to track interactions of users with items in a store and [0215-0216] – real-time tracking of users as they interact with items around them). Regarding Claim 8, Buibas teaches: The method of claim 6, further comprising: capturing the real-time captured image of the detected first object by a second camera (See Buibas ¶ [0135] – using cameras to track interactions of users with items in a store, [0157] – using multiple cameras and [0215-0216] – real-time tracking of users as they interact with items around them). Regarding Claim 9, Buibas teaches: The method of claim 1, wherein displaying the rendition of the display area comprises: displaying a virtualized rendition of the display area, the virtualized rendition comprising a virtualized rendition of the detected first object and displaying the transaction value associated with the identified first object in an associated manner with the virtualized rendition of the detected first object (See Buibas Fig. 13 – showing users interacting with items and displaying a price [transaction value] of each item, wherein the price is associated with each item in the same dialog box respective of said items).
Regarding Claim 10, Buibas teaches: The method of claim 1, further comprising: storing, by the one or more processors, one or more of data identifying the object, the first image, the representation of the first object, or the transaction value associated with the identified first object (See Buibas ¶ [0156] – calibrating images of user and item interaction in a store, calibrating said images based on multiple parameters and using said calibrated images to determine the types of user/ item interactions as said user and items move from one camera field of view to another, thereby showing data storage by example and Fig. 13 – showing users interacting with items and displaying a price [transaction value] of each item). Regarding Claim 11, Buibas teaches: The method of claim 10, further comprising: detecting, by the scale, a second change in weight; upon detecting the second change in weight, capturing, by the first imaging camera, a second image (See Buibas ¶ [0133] – shelves have weight sensors, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item, wherein a weight sensor is placed under each item, Fig. 21 – showing a plurality of items, thereby showing a plurality of weight sensors); detecting, by the one or more processors, a second object in the second image, the second object not being in the first image (See Buibas ¶ [0186] – recognizing a "put" action of an item on a shelf that was not previously in that location) and having a value based on weight (See Buibas ¶ [0182] – a weight sensor may be placed under each item in the case to detect when the item is removed and data from the weight sensor may be transmitted to a processor. Any type or types of sensors may be used to detect or confirm that a user takes an item.
Detection of removal of a product, using any type of sensor, may be combined with tracking of a person using cameras in order to attribute the taking of a product to a specific user); identifying, by the one or more processors, the second object (See Buibas Fig. 21 – identifying item (111) as a soda can); segmenting, by the one or more processors, a portion of the second image corresponding to the detected second object (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items); providing the segmented portion of the second image to the trained machine learning model (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items); utilizing the trained machine learning model to identify the second object (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items); associating, by the one or more processors, the detected second change in weight with the identified second object; determining, by the first camera, a transaction value associated with the identified second object based on the detected second change in weight and the second object value (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example, [0150-0152] – cameras' detection of user interaction with items, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item and Fig. 13 – showing a rendition of a monitored display area and displaying prices [transaction value] of items selected by users); and displaying, by the display, a rendition of the display area, the rendition containing a representation of the second object and the transaction value associated with the identified second object based on the detected second change in weight (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item, wherein a weight sensor is placed under each item, [0184] – displaying on a display at the exit of a store, a message [rendition] of items interacted with and charging an amount [transaction value] to a user associated with said item interaction and Fig. 21 – showing a plurality of items, thereby showing a plurality of weight sensors). Regarding Claim 12, Buibas teaches: The method of claim 11, wherein capturing the second image occurs in response to detecting a presence of the second object entering a field of view (FOV) of the first camera (See Buibas ¶ [0156] – image calibration based on item tracking from one camera field of view to another). Regarding Claim 13, Buibas teaches: The method of claim 1, wherein detecting the first change in weight of the display area comprises: detecting the first change in weight of the display area after a steady state condition is satisfied (It is noted that these limitations are conditional, as they require that a steady state condition is satisfied. These limitations are not necessarily invoked, based on the conditional nature of said limitations; therefore, these limitations are not given patentable weight).
Regarding Claim 14, Buibas teaches: The method of claim 13, wherein the steady state condition is one or more of a threshold time window or a threshold amount of weight increase (These limitations do not further functionally limit the method of claim 13 and merely describe the steady state condition [which is conditional and not given patentable weight]; therefore, these limitations are considered non-functional descriptive material and are not given patentable weight). Regarding Claim 15, Buibas teaches: The method of claim 13, wherein detecting the first change in weight of the display area after the steady state condition comprises removing spurious measured weight values from the scale (It is noted that these limitations are conditional, as they require that a steady state condition is satisfied, in addition to their dependency on conditional claim 13. These limitations are not necessarily invoked, based on the conditional nature of said limitations; therefore, these limitations are not given patentable weight).
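Claims 13-15 describe gating the weight reading on a steady-state condition: a threshold time window, a threshold amount of weight change, and removal of spurious measured values. A minimal debounce of that kind might look like the following; the window size, threshold, and function name are assumptions for illustration, not details from the application:

```python
def steady_state_delta(readings_g, window=5, threshold_g=2.0):
    # Report a weight change only once the last `window` samples agree
    # (spurious spikes while the shelf is disturbed are discarded) and the
    # settled change exceeds `threshold_g`. All parameters are hypothetical.
    if len(readings_g) < window + 1:
        return None                              # not enough samples yet
    recent = readings_g[-window:]
    if max(recent) - min(recent) > threshold_g:
        return None                              # still settling: treat as spurious
    delta = recent[-1] - readings_g[0]           # settled weight minus baseline
    return delta if abs(delta) >= threshold_g else None
```

A reading sequence that settles from 500 g to 145 g would report a -355 g change, while a sequence still fluctuating within the window reports nothing.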
Regarding Claim 16, Buibas teaches: A system comprising: a scale, the scale detecting a first change in weight of a display area (As the specification defines a scale device as an electronic scale detecting changes in weight, see Buibas ¶ [0182] – using weight sensors [electronic scales] to detect item movement from a display shelf); a first camera, the first camera capturing a first image featuring the display area (See Buibas ¶ [0134-0135] – using image data to determine a person in a store has interacted with an item in an item storage area); one or more processors, the one or more processors: (i) training a machine learning model based on a set of training images (See Buibas ¶ [0139] – machine learning techniques used to train classification algorithms/ neural networks based on image data), (ii) detecting the first object in the first image, the first object having a value based on weight (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example), (iii) identifying one or more features of the detected first object in the image (See Buibas ¶ [0278-0279] – using features of items in a 3D space to identify said items), (iv) identifying the first object based on the one or more features (See Buibas ¶ [0278-0279] – using features of items in a 3D space to identify said items), (v) segmenting a portion of the first image corresponding to the detected first object (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items), (vi) providing the segmented portion of the first image to the trained machine learning model (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items), (vii) utilizing the trained machine
learning model to identify the first object, the trained machine learning model being a deep learning neural network (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items and, as the specification of the instant application describes a deep learning neural network as comprising a convolutional neural network, a recurrent neural network and a fully connected neural network, see Buibas ¶ [0135] – neural network employing any number of fully connected layers, convolutional layers, recurrent layers, or any other type of neurons or connections), (viii) associating the detected first change in weight with the identified first object (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example), (ix) determining a transaction value associated with the identified first object based on the detected first change in weight and the first object value (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example, [0150-0152] – cameras' detection of user interaction with items, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item and Fig.
13 – showing a rendition of a monitored display area and displaying prices [transaction value] of items selected by users); and a display, the display displaying a rendition of the display area, the rendition containing a representation of the first object and the transaction value associated with the identified first object based on the detected first change in weight (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item and Fig. 13 – showing a rendition of a monitored display area and displaying prices [transaction value] of items selected by users). Regarding Claim 18, Buibas teaches: The system of claim 16, wherein: the scale is further caused to detect a second change in weight; the first camera is further caused to capture a second image upon the detection of the second change in weight (See Buibas ¶ [0133] – shelves have weight sensors, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item, wherein a weight sensor is placed under each item, Fig.
21 – showing a plurality of items, thereby showing a plurality of weight sensors); the one or more processors are further caused to: (i) store one or more of data identifying the object, the first image, the representation of the first object, or the transaction value associated with the identified first object (See Buibas ¶ [0156] – calibrating images of user and item interaction in a store, calibrating said images based on multiple parameters and using said calibrated images to determine the types of user/ item interactions as said user and items move from one camera field of view to another, thereby showing data storage by example and Fig. 13 – showing users interacting with items and displaying a price [transaction value] of each item); (ii) detect a second object in the second image, the second object not being in the first image (See Buibas ¶ [0186] – recognizing a "put" action of an item on a shelf that was not previously in that location) and having a value based on weight (See Buibas ¶ [0182] – a weight sensor may be placed under each item in the case to detect when the item is removed and data from the weight sensor may be transmitted to a processor. Any type or types of sensors may be used to detect or confirm that a user takes an item.
Detection of removal of a product, using any type of sensor, may be combined with tracking of a person using cameras in order to attribute the taking of a product to a specific user), (iii) identify one or more features of the detected second object in the image (See Buibas ¶ [0278-0279] – using features of items in a 3D space to identify said items), (iv) identify the second object based on the one or more features (See Buibas ¶ [0278-0279] – using features of items in a 3D space to identify said items), (v) segment a portion of the second image corresponding to the detected second object (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items), (vi) provide the segmented portion of the second image to the trained machine learning model (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items), (vii) utilize the trained machine learning model to identify the second object (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items), (viii) associate the detected second change in weight with the identified second object (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example and [0182] – a weight sensor may be placed under each item in the case to detect when the item is removed and data from the weight sensor may be transmitted to a processor. Any type or types of sensors may be used to detect or confirm that a user takes an item.
Detection of removal of a product, using any type of sensor, may be combined with tracking of a person using cameras in order to attribute the taking of a product to a specific user), (ix) determine a transaction value associated with the identified second object based on the detected second change in weight and the second object value (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example, [0150-0152] – cameras' detection of user interaction with items, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item and Fig. 13 – showing a rendition of a monitored display area and displaying prices [transaction value] of items selected by users); and the display is further caused to display a rendition of the display area, the rendition containing a representation of the second object and the transaction value associated with the identified second object based on the detected second change in weight (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item, wherein a weight sensor is placed under each item, [0184] – displaying on a display at the exit of a store, a message [rendition] of items interacted with and charging an amount [transaction value] to a user associated with said item interaction and Fig. 21 – showing a plurality of items, thereby showing a plurality of weight sensors).
Regarding Claim 19, Buibas teaches: A tangible, non-transitory computer-readable medium storing executable instructions, the instructions, when executed by one or more processors of a computer system, cause the computer system to (See Buibas ¶ [0128] – the system operating from instructions stored in a non-transitory computer-readable media): train a machine learning model based on a set of training images (See Buibas ¶ [0139] – machine learning techniques used to train classification algorithms/ neural networks based on image data); receive, from a scale, a detected first change in weight of a display area; receive, from a first camera, a first image of the display area (See Buibas ¶ [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item); detect a first object in the first image, the first object having a value based on weight (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example); identify one or more features of the detected first object in the image (See Buibas ¶ [0278-0279] – using features of items in a 3D space to identify said items), identify the first object based on the one or more features (See Buibas ¶ [0278-0279] – using features of items in a 3D space to identify said items), segment a portion of the first image corresponding to the detected first object (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items), provide the segmented portion of the first image to the trained machine learning model (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items), and
utilize the trained machine learning model to identify the first object, the trained machine learning model being a deep learning neural network (See Buibas ¶ [0280-0282] – using example crops of images comprising interactions of users and items to train a neural network [machine learning model] to classify and identify said items and, as the specification of the instant application describes a deep learning neural network as comprising a convolutional neural network, a recurrent neural network and a fully connected neural network, see Buibas ¶ [0135] – neural network employing any number of fully connected layers, convolutional layers, recurrent layers, or any other type of neurons or connections); associate the detected first change in weight with the identified first object (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example); determine a transaction value associated with the identified first object based on the detected first change in weight and the first object value (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example, [0150-0152] – cameras' detection of user interaction with items, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item and Fig.
13 – showing a rendition of a monitored display area and displaying prices [transaction value] of items selected by users); generate a rendition of the display area, the rendition containing a representation of the first object and the transaction value associated with the identified first object based on the detected first change in weight (See Buibas ¶ [0133] – using weight sensors to detect a quantity of product that is taken or dispensed from a shelf, thereby showing a change in weight by example, [0182-0183] – using weight sensors [electronic scales] and cameras to detect item movement from a display shelf and charge an account of a user [transaction] that took said item and Fig. 13 – showing a rendition of a monitored display area and displaying prices [transaction value] of items selected by users); and transmit the rendition to a display device (See Buibas ¶ [0134] – transmitting a determination based on an interaction between a user and an item [rendition] to a display).

Response to Arguments

Applicant's arguments filed 09/15/2025 have been fully considered but they are not persuasive.

Rejection under 35 U.S.C. § 101: The previous rejection of claims 1-3, 5-16 and 18-19 under 35 U.S.C. § 101 is withdrawn. Amended independent claims 1, 16 and 19 show integration into a practical application by reciting:
- Specific hardware components (scales, cameras, processors, displays)
- Steps for physically detecting weight changes, capturing images, segmenting and identifying objects, and displaying transaction values
- Real-time interaction with physical goods in a retail environment

These elements may demonstrate that the claims are integrated into a practical application, which is a key consideration under the USPTO's Step 2A, Prong Two (Alice/Mayo framework).
Amended independent claims 1, 16 and 19 show additional technical features that amount to significantly more than the abstract idea by specifying:
- The use of deep learning neural networks for object identification
- The association of physical weight changes with visual object identification
- Automated display and inventory management based on detected events

Such technical details may help distinguish the claims from mere abstract ideas, especially if they are not routine or conventional in the field. Therefore, the claims recite a combination of hardware and software for automating a physical process while including technical features and tangible steps, which may support eligibility under § 101.

Rejection under 35 U.S.C. § 102: The previous rejection under 35 U.S.C. § 102 of claims 1-3, 5-16 and 18-19 is maintained. The amendments to independent claims 1, 16 and 19 do not overcome the prior art of record. The applicant's arguments do not consider all of the previously cited sections of Buibas, which have been updated in the current rejection under 35 U.S.C. § 102 to reflect the amendments to the claims as currently limited. Contrary to the applicant's assertion that Buibas fails to teach "determining, by the first camera, a transaction value associated with the identified first object based on the detected first change in weight and the first object value" as recited in claim 1 (and similarly claims 16 and 19), these features continue to be taught by Buibas. It is noted that the applicant merely alleges a lack of teaching by Buibas, with no specific discussion of the reference in the arguments as filed. Nonetheless, as detailed in the current rejection under 35 U.S.C. § 102 above, ¶ [0133], [0182-0183] and Fig.
13 of Buibas teach these features by example, using cameras and weight sensors to detect a quantity of product that is taken or dispensed from a shelf [change in weight], to detect item movement from a display shelf and charge an account of the user [transaction] that took said item. Further, see below a detailed mapping of the independent claim:

- Training a machine learning model based on training images (Buibas ¶[0139], [0136], [0140]; Figs. 3–4: neural networks trained with labeled before/after images to recognize items/actions; supports retraining with new examples)
- Detecting, by a scale, a first change in weight of a display area (Buibas ¶[0133], [0183]: optional; "product shelf…may have weight sensors…to assist in detecting products taken, moved, or replaced")
- Capturing, by a camera, an image of the display area (Buibas throughout; e.g., Figs. 1, 3, 53–63: multiple cameras capture images of shelves/areas before and after shopper interaction)
- Detecting an object in the image, the object having a value based on weight (Buibas ¶[0139], [0149], [0151], [0183]: detects items in images; value/price is not always weight-based but can be, per the inventory/quantity calculation in ¶[0149], [0151], [0183])
- Identifying the object (Buibas ¶[0139], [0149], [0151], [0183]: neural networks/classifiers output a specific item identity based on images)
- Segmenting the image portion corresponding to the detected object (Buibas ¶[0139], [0151], [0155], [0156], [0161]; Figs. 3, 12, 62, 63: segmentation and region-of-interest selection for item identification)
- Providing the segmented image to the trained ML model (Buibas ¶[0139], [0151], [0162]; Figs. 3, 12, 62, 63: segmented/cropped image regions are input to neural networks/classifiers)
- Utilizing the trained model to identify the object (Buibas ¶[0139], [0151]; Figs. 3, 12, 62, 63: neural network identifies the item based on the input image(s))
- Associating the detected change in weight with the identified object (Buibas ¶[0133], [0183]; Fig. 21: if weight sensors are used, the weight change can be associated with the identified item; Buibas emphasizes vision as primary, weight as optional)
- Determining a transaction value based on the weight change and the identified object (Buibas ¶[0133], [0149], [0183], [0144], [0151]: the system can calculate a quantity/price/charge for items taken, including by weight if available)
- Displaying, by a display, a rendition of the display area with a representation of the object and the transaction value (Buibas ¶[0132], [0134], [0144]; Figs. 2, 13, 213, 213b: the system can display a shopping cart, item list, or confirmation, including on a shopper device or store display)
- Storing data identifying the object, image, representation, and transaction value (Buibas ¶[0134], [0139], [0140], [0150], [0154], [0162]; Figs. 13, 213, 213b: the system stores event data, images, item IDs, actions, and transaction/charge data)
- Using a deep learning neural network for object identification (Buibas ¶[0139], [0140], [0151], [0155]; Figs. 3, 4, 63: deep learning (CNNs, Siamese networks, etc.) for image/object recognition)
- Real-time or virtual display of the detected object and transaction value (Buibas ¶[0134], [0144]; Figs. 2, 13, 213, 213b: real-time display of shopping cart, items, and actions; may be on a store display or shopper device)

Dependent claims 2-3, 5-15 and 18, for their dependency on rejected base claims 1, 16 or 19, respectively, remain rejected as well for the reasons noted above in the current rejection under 35 U.S.C. § 102. The applicant is generally reminded that prior art must be considered in its entirety (MPEP 2141.02(VI)).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW S WERONSKI whose telephone number is (571) 272-5802. The examiner can normally be reached M-F 8 am - 5 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fahd A. Obeid, can be reached at (571) 270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW S WERONSKI/
Examiner, Art Unit 3627

/FAHD A OBEID/
Supervisory Patent Examiner, Art Unit 3627
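For orientation, the pipeline the examiner maps onto Buibas reduces to three steps: detect a weight change on a shelf scale, identify the removed item from a segmented camera image, and combine the two to compute a transaction value. The sketch below illustrates only that final association step; the catalog, per-unit weights, prices, and all names are hypothetical assumptions, not drawn from the application or from Buibas, and the image-classification step is stubbed out as a precomputed `item_id` label.

```python
from dataclasses import dataclass

# Illustrative catalog: per-unit weight (grams) and unit price per product.
# These entries are hypothetical examples, not taken from the application.
CATALOG = {
    "soda_can": {"unit_weight_g": 355.0, "price": 1.50},
    "chip_bag": {"unit_weight_g": 60.0, "price": 2.25},
}

@dataclass
class ShelfEvent:
    weight_delta_g: float  # change reported by the shelf scale (negative = removed)
    item_id: str           # label assumed to come from the trained image classifier

def transaction_value(event: ShelfEvent) -> float:
    """Associate a detected weight change with the identified item and
    compute a charge: quantity inferred from the weight delta, times unit price."""
    entry = CATALOG[event.item_id]
    quantity = round(abs(event.weight_delta_g) / entry["unit_weight_g"])
    return quantity * entry["price"]

# Example: the scale reports ~710 g removed and the classifier labels the
# segmented image "soda_can", so two cans (2 x $1.50) are charged.
event = ShelfEvent(weight_delta_g=-710.0, item_id="soda_can")
print(transaction_value(event))  # 3.0
```

In a deployed system the `item_id` would be produced by the neural-network identification step operating on the segmented shelf image; inferring quantity from the weight delta is what ties the scale reading to the identified object when computing the charge.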

Prosecution Timeline

Nov 30, 2022
Application Filed
Sep 23, 2024
Non-Final Rejection — §102
Feb 25, 2025
Response Filed
May 09, 2025
Final Rejection — §102
Sep 15, 2025
Request for Continued Examination
Oct 01, 2025
Response after Non-Final Action
Jan 24, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443938
Point-of-Sale (POS) Operation System
2y 5m to grant Granted Oct 14, 2025
Patent 12400247
REPRESENTING SETS OF ENTITITES FOR MATCHING PROBLEMS
2y 5m to grant Granted Aug 26, 2025
Patent 12367454
METHOD AND SYSTEM FOR VEHICLE MANAGEMENT
2y 5m to grant Granted Jul 22, 2025
Patent 12333614
QUALITY, AVAILABILITY AND AI MODEL PREDICTIONS
2y 5m to grant Granted Jun 17, 2025
Patent 12327393
SYSTEM AND METHOD FOR CAPTURING CONSISTENT STANDARDIZED PHOTOGRAPHS AND USING PHOTOGRAPHS FOR CATEGORIZING PRODUCTS
2y 5m to grant Granted Jun 10, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
10%
Grant Probability
29%
With Interview (+19.8%)
4y 0m
Median Time to Grant
High
PTA Risk
Based on 115 resolved cases by this examiner. Grant probability derived from career allow rate.
