DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Receipt is acknowledged of applicant’s amendment filed on 9/23/25. Claims 1, 11, and 19 have been amended. Claims 1-20 are pending, and an action on the merits follows.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Garner (US Publication No. 2021/0110371, cited in the previous action) in view of BOSSARD et al. (US Publication No. 2022/0392209, cited in the previous action), in view of Dhankhar (US Publication No. 2019/005743, cited in the previous action), in view of Migdal (US Publication No. 2022/0277313), and further in view of Sanil et al. (US Publication No. 2023/0368625).
Re Claim 1, Garner discloses a method, comprising:
receiving at least one item image for an item that an operator of a terminal indicated as being an item of interest (P2, P126, P139; the imaging system captures one or more images that are used to identify the product 1020 and/or capture identification information of the product; P59, this can allow the user (e.g., customer, worker, etc.) to select the appropriate product or identify that both are inaccurate);
providing the at least one item image as input to a machine learning model (MLM) (P62-P64, P72-P73; Fig. 5, selecting images and controlling the input of image data into one or more machine learning models);
receiving as output from the MLM a determination as to an item type for the item based on the at least one item image (Fig. 5, P66); and causing a transaction associated with the item on the terminal to be suspended when the item type outputted by the MLM does not indicate that the item is the item of interest (P59, expected accuracy for the product “label1” and approximately 30% expected accuracy for the product “label2”. Additionally or alternatively, the scores can be evaluated relative to a threshold difference or other consideration. For example, when there is a threshold difference in weighted scores, the higher weighted aggregate identifier is selected. In some embodiments, the decision control circuit may cause both products to be presented on the portable device and/or present their relative scores. This can allow the user (e.g., customer, worker, etc.) to select the appropriate product or identify that both are inaccurate. Further, the decision control circuit may prevent a result from being output when a threshold consistency or consensus cannot be determined from the subset of frames and cause further evaluation 330 of subsequent captured frames and/or instruct the user to capture another set of frames of video content for the item. An error notification 332 may be generated when consistency cannot be identified, such as after a threshold number of subsets of frames are evaluated and a consistency cannot be obtained).
Although Garner discloses products and that thresholds may depend on the type of product (P90), Garner fails to specifically disclose that the products are produce items; that the image is received during a transaction workflow that is modified to detect when the operator of the terminal enters a selection into a transaction interface which identifies the item in a transaction as being the produce item, providing a technical solution that can detect when the operator is attempting to cheat during a transaction by claiming that a CPG item is a lower-cost produce item; wherein the MLM is trained to determine whether the item is a consumer packaged good (CPG) item type, wherein a CPG item type includes any of a deli item, a bakery item, a dairy item, consumable beverages, consumer packaged foods, non-food items, medications, plants, or flowers; wherein the MLM is trained on item images to classify the item as either a produce item or a non-produce item; and wherein the MLM outputs a confidence score indicative of a determined likelihood that the item is a CPG item type.
However, BOSSARD discloses machine learning models that include different machine learning models for detecting different types of objects, specifically a machine learning model trained to classify or otherwise recognize fruits in images (P35).
Given the teachings of BOSSARD it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner with receiving at least one item image for an item that an operator of a terminal indicated as being a produce item.
As suggested by BOSSARD, doing so would provide machine learning models that may include a universal machine learning model trained to classify or otherwise recognize any type of object detected in an image (P35).
Garner as modified by BOSSARD discloses all of the claimed limitations from above except for the at least one item image received during a transaction workflow that is modified to detect when the operator of the terminal enters a selection into a transaction interface which identifies the item in a transaction as being the produce item, providing a technical solution that can detect when the operator is attempting to cheat during a transaction by claiming that a CPG item is a lower-cost produce item; wherein the MLM is trained to determine whether the item is a consumer packaged good (CPG) item type, wherein a CPG item type includes any of a deli item, a bakery item, a dairy item, consumable beverages, consumer packaged foods, non-food items, medications, plants, or flowers; wherein the MLM is trained on item images to classify the item as either a produce item or a non-produce item; and wherein the MLM outputs a confidence score indicative of a determined likelihood that the item is a CPG item type.
Dhankhar discloses wherein the MLM is trained to determine whether the item is a consumer packaged good (CPG) item type, wherein a CPG item type includes any of a deli item, a bakery item, a dairy item, consumable beverages, consumer packaged foods, non-food items, medications, plants, or flowers (P21; see claims 14 and 28, and Abstract).
Given the teachings of Dhankhar it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner and BOSSARD with wherein the MLM is trained to determine whether the item is a consumer packaged good (CPG) item type, wherein a CPG item type includes any of a deli item, a bakery item, a dairy item, consumable beverages, consumer packaged foods, non-food items, medications, plants, or flowers.
Doing so would provide an automated object recognition kiosk for retail checkout of fresh foods and provide a seamless retail checkout experience for better customer service (P6).
Garner as modified by BOSSARD as modified by Dhankhar discloses all of the claimed limitations from above except for the at least one item image received during a transaction workflow that is modified to detect when the operator of the terminal enters a selection into a transaction interface which identifies the item in a transaction as being the produce item, providing a technical solution that can detect when the operator is attempting to cheat during a transaction by claiming that a CPG item is a lower-cost produce item; wherein the MLM is trained on item images to classify the item as either a produce item or a non-produce item; and wherein the MLM outputs a confidence score indicative of a determined likelihood that the item is a CPG item type.
However, Migdal discloses wherein the MLM is trained on item images to classify the item as either a produce item or a non-produce item; wherein the MLM outputs a confidence score (confidence value or percentage) indicative of a determined likelihood that the item is a CPG item type (P22, P29-P30, P36-P38, P59, P99).
Given the teachings of Migdal it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner as modified by BOSSARD as modified by Dhankhar with wherein the MLM is trained on item images to classify the item as either a produce item or a non-produce item; wherein the MLM outputs a confidence score indicative of a determined likelihood that the item is a CPG item type.
Doing so would permit a fast, efficient, and accurate mechanism for providing an accurate pick list of potential produce items for presentation and would provide a quick, accurate identifying means for verifying produce items during transactions at a terminal (P22).
Garner as modified by BOSSARD, Dhankhar, and Migdal fails to disclose the at least one item image received during a transaction workflow that is modified to detect when the operator of the terminal enters a selection into a transaction interface which identifies the item in a transaction as being the produce item, providing a technical solution that can detect when the operator is attempting to cheat during a transaction by claiming that a CPG item is a lower-cost produce item.
Sanil discloses at least one item image received during a transaction workflow that is modified to detect when the operator of the terminal enters a selection into a transaction interface which identifies the item in a transaction as being the produce item, providing a technical solution that can detect when the operator is attempting to cheat during a transaction by claiming that a CPG item is a lower-cost produce item (Abstract, P3, P5, P33, P34, P39, P62, P95).
Given the teachings of Sanil it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner, BOSSARD , Dhankhar and Migdal with at least one item image received during a transaction workflow that is modified to detect when the operator of the terminal enters a selection into a transaction interface which identifies the item in a transaction as being the produce item, providing a technical solution that can detect when the operator is attempting to cheat during a transaction by claiming that a CPG item is a lower-cost produce item.
Doing so provides a combination of preventative measures, such as computer vision-based product detection and identification, inventory control, and employee training, to prevent shrinkage, protect profitability, and ensure the long-term success of the business (P34).
Re Claim 2, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 1, and BOSSARD discloses receiving an override from the terminal indicating that the transaction was resumed, wherein the override is received responsive to an audit confirming that the item is an item of interest type associated with the item of interest (P59, the decision control circuit may cause both products to be presented on the portable device and/or present their relative scores; this can allow the user (e.g., customer, worker, etc.) to select the appropriate product or identify that both are inaccurate).
Re Claim 3, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 2, and Garner discloses flagging the at least one item image as feedback for subsequent training of the MLM (P28, P88, and P119; one or more model training systems 110 train one or more machine learning models and/or update training of such models to be distributed to mobile devices and/or to update models operating on mobile devices. The training, in some implementations, utilizes information provided by the mobile devices based on local image processing by the mobile devices through trained machine learning models being applied by the mobile devices. Additionally or alternatively, some mobile devices 102 and/or other image capturing systems can supply image and/or video data to the model training systems to be used in recognizing a product within the images and/or to further train the models based on the application of the models).
Re Claim 4, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 1, and Garner discloses wherein receiving the at least one item image further includes receiving an item code entered for the item by the operator at the terminal (P123, product identification information is captured by and/or inputted into the portable device). BOSSARD discloses that the item code is a produce code.
Re Claim 5, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 4, and Garner discloses wherein providing further includes providing the item code as additional input to the MLM (P123, P128).
Re Claim 6, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 1, and Garner discloses wherein receiving the output further includes receiving a confidence value (score value) as the output (P49, P59, P71).
Re Claim 7, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 6, and Garner discloses wherein receiving the confidence value further includes obtaining a threshold confidence value (score value associated with a threshold consistency or consensus) based on a store identifier associated with the terminal (P49, P59, P71, P75; the model training system is configured to train one or more models for one or more customers or other users. Model customization may be implemented for groups of customers, one or more workers, and/or other users, and models may be trained for a particular worker, for one or more tasks intended to be performed by workers, for locations within the retail store where workers perform tasks, or the like. Thus, the score value is used to rate or prioritize one or more outputs from the machine learning models in evaluating the correlations between outputs from different models).
Re Claim 8, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 7, and Garner discloses wherein causing further includes comparing the confidence value to the threshold confidence value and causing the transaction to be suspended when the confidence value is at or above the threshold confidence value (P49, P59). BOSSARD discloses indicating the item is a particular CPG and not the produce item (P35, P37; regarding the confidence level, the processor 202 may determine whether the confidence level is less than (and/or satisfies) a (e.g., predetermined) confidence level threshold or whether the confidence level for one classification is higher than the confidence level(s) for one or more other classifications by a threshold amount. In one or more implementations, the confidence level threshold may be determined in advance to be different for different types of objects (wherein a universal machine learning model is trained to classify or otherwise recognize any type of object detected in an image), such as based on the diversity and/or complexity of the object being classified, and/or the confidence level threshold may be determined in advance to be the same across all different types of objects).
Re Claim 9, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 7, and Garner discloses wherein causing further includes providing the item type and the threshold confidence value to the terminal, causing the terminal to suspend the transaction when the confidence value is at or above the threshold confidence value indicating the item type (P49, P59), and BOSSARD discloses identifying a particular CPG and not the produce item (P35, P37; regarding the confidence level, the processor 202 may determine whether the confidence level is less than (and/or satisfies) a (e.g., predetermined) confidence level threshold or whether the confidence level for one classification is higher than the confidence level(s) for one or more other classifications by a threshold amount. In one or more implementations, the confidence level threshold may be determined in advance to be different for different types of objects (wherein a universal machine learning model is trained to classify or otherwise recognize any type of object detected in an image), such as based on the diversity and/or complexity of the object being classified, and/or the confidence level threshold may be determined in advance to be the same across all different types of objects).
Re Claim 10, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 1, and Garner discloses wherein causing further includes providing the item type to the terminal, causing the terminal to suspend the transaction based on the terminal determining the item type does not comport with the item being the item of interest (P59), and BOSSARD discloses a universal machine learning model that is trained to classify or otherwise recognize any type of object detected in an image (P35, P37).
Re Claim 11, Garner discloses a method, comprising:
training a machine learning model (MLM) on images of items to provide item types for the items (P28, P82; supply image and/or video data to the model training systems to be used in recognizing a product within the images and/or to further train the models based on the application of the models);
receiving at least one item image during a transaction at a terminal responsive to an operator of the terminal indicating that a transaction item associated with the at least one item image is a specific item of interest (P59, the decision control circuit may cause both products to be presented on the portable device and/or present their relative scores; this can allow the user (e.g., customer, worker, etc.) to select the appropriate product);
obtaining a current item type for the transaction item from the MLM based on providing the at least one item image to the MLM as input (P27; the product recognition system 100 enables numerous portable user devices 102 (e.g., smartphones, tablets, etc.) operated by different users (e.g., customers, workers, etc.) to capture images and/or video content of a product and locally identify the product); and
making a determination as to whether to suspend the transaction for an audit of the transaction item when the current item type does not comport with a specific item type (P94, in those instances where a product attempting to be identified through the image and/or video content cannot be recognized, whether because it is a product that is not in the custom listing of products 634 or cannot otherwise be recognized because of one or more factors (e.g., portable device is moving too much, not capturing sufficient portion of the product, not capturing distinguishing portion(s) of the product, etc.), the application on the mobile device can inform the customer that the product could not be recognized and provide subsequent instructions).
Although Garner discloses products and that thresholds may depend on the type of product (P90), Garner fails to specifically disclose wherein, during training, the MLM is provided an actual item type classification that is expected as output for each of the images in a training data set, and through this training process, the MLM is configured to provide an item type classification based on an item image received during a transaction; that the products are produce items, obviating a need to rely on an operator's truthfulness in identifying a transacted item as a produce item; wherein the produce item type includes any of: fruits, vegetables, mushrooms, nuts, herbs, farm-produced items, items sold by weight without barcodes, or customer-ground coffee; wherein the MLM is trained on images depicting a plurality of items, where a subset of item images are of CPG items and a subset of item images are of produce items; and wherein the images are preprocessed before being provided to the MLM for training by cropping background details associated with surfaces and surrounds of a given terminal from the images.
However, BOSSARD discloses machine learning models that include different machine learning models for detecting different types of objects, specifically a machine learning model trained to classify or otherwise recognize fruits in images (P35).
Given the teachings of BOSSARD it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner with receiving at least one item image during a transaction at a terminal responsive to an operator of the terminal indicating that a transaction item associated with the at least one item image is a produce item and that an item type is a produce item type.
As suggested by BOSSARD, doing so would provide machine learning models that may include a universal machine learning model trained to classify or otherwise recognize any type of object detected in an image (P35).
Garner as modified by BOSSARD discloses all of the claimed limitations from above except for wherein, during training, the MLM is provided an actual item type classification that is expected as output for each of the images in a training data set, and through this training process, the MLM is configured to provide an item type classification based on an item image received during a transaction, obviating a need to rely on an operator's truthfulness in identifying a transacted item as a produce item; wherein the produce item type includes any of: fruits, vegetables, mushrooms, nuts, herbs, farm-produced items, items sold by weight without barcodes, or customer-ground coffee; wherein the MLM is trained on images depicting a plurality of items, where a subset of item images are of CPG items and a subset of item images are of produce items; and wherein the images are preprocessed before being provided to the MLM for training by cropping background details associated with surfaces and surrounds of a given terminal from the images.
Dhankhar discloses wherein the produce item type includes any of: fruits, vegetables, mushrooms, nuts, herbs, farm-produced items, items sold by weight without barcodes, or customer-ground coffee (see claims 14 and 28, and Abstract).
Given the teachings of Dhankhar it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner and BOSSARD with wherein the produce item type includes any of: fruits, vegetables, mushrooms, nuts, herbs, farm-produced items, items sold by weight without barcodes, or customer-ground coffee.
Doing so would provide an automated object recognition kiosk for retail checkout of fresh foods and provide a seamless retail checkout experience for better customer service (P6).
Garner as modified by BOSSARD as modified by Dhankhar discloses all of the claimed limitations from above except for wherein, during training, the MLM is provided an actual item type classification that is expected as output for each of the images in a training data set, and through this training process, the MLM is configured to provide an item type classification based on an item image received during a transaction, obviating a need to rely on an operator's truthfulness in identifying a transacted item as a produce item; wherein the MLM is trained on images depicting a plurality of items, where a subset of item images are of CPG items and a subset of item images are of produce items; and wherein the images are preprocessed before being provided to the MLM for training by cropping background details associated with surfaces and surrounds of a given terminal from the images.
Migdal discloses wherein the MLM is trained on images depicting a plurality of items, where a subset of item images are of CPG items (non-produce items) and a subset of item images are of produce items (P29-P30, P36-P38, P59, P99); wherein the images are preprocessed before being provided to the MLM (P26-P27) for training by cropping background details associated with surfaces and surrounds of a given terminal from the images (intended use).
Given the teachings of Migdal it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner as modified by BOSSARD as modified by Dhankhar with wherein the MLM is trained on images depicting a plurality of items, where a subset of item images are of CPG items and a subset of item images are of produce items; wherein the images are preprocessed before being provided to the MLM for training by cropping background details associated with surfaces and surrounds of a given terminal from the images.
Doing so would provide a quick, accurate identifying means for verifying produce items during transactions at a terminal (P22).
Garner as modified by BOSSARD, Dhankhar, and Migdal fails to disclose wherein, during training, the MLM is provided an actual item type classification that is expected as output for each of the images in a training data set, and through this training process, the MLM is configured to provide an item type classification based on an item image received during a transaction, obviating a need to rely on an operator's truthfulness in identifying a transacted item as a produce item.
Sanil discloses wherein, during training, the MLM is provided an actual item type classification that is expected as output for each of the images in a training data set, and through this training process, the MLM is configured to provide an item type classification based on an item image received during a transaction, obviating a need to rely on an operator's truthfulness in identifying a transacted item as a produce item (Abstract, P3, P5, P33, P34, P39-P48, P62, P95).
Given the teachings of Sanil it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner, BOSSARD, Dhankhar, and Migdal with wherein, during training, the MLM is provided an actual item type classification that is expected as output for each of the images in a training data set, and through this training process, the MLM is configured to provide an item type classification based on an item image received during a transaction, obviating a need to rely on an operator's truthfulness in identifying a transacted item as a produce item.
Doing so provides a combination of preventative measures, such as computer vision-based product detection and identification, inventory control, and employee training , to prevent shrinkage to protect their profitability and ensure the long-term success of the business (P34).
Re Claim 12, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 11, and BOSSARD discloses retaining the at least one item image when the current item type was incorrect and the transaction item did comport with the produce item type (P11, P13, P35-P37, P50).
Re Claim 13, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 11, and BOSSARD discloses iterating to the training of the MLM using the at least one item image and an indication that the transaction item is associated with the produce item type (P11, P13, P35-P37, P50).
Re Claim 14, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 11, and Migdal discloses wherein training further includes training the MLM to produce a consumer packaged good (CPG) item type or a non-CPG item type for each of the items (P22, P36, P99).
Re Claim 15, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 14, and BOSSARD discloses wherein training the MLM further includes training each of a plurality of additional MLMs to produce a respective corresponding sub-CPG item type for each item identified as a CPG item type by the MLM (P35-P37).
Re Claim 16, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 14, and BOSSARD discloses wherein training the MLM further includes training the MLM to produce any of a plurality of sub-CPG item types for each item identified as a CPG item type (P35-P37).
Re Claim 17, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 11, and Garner discloses wherein training further includes training the MLM to produce the item types further based on operator-provided price lookup (PLU) codes (P67, P118, P128, P129; Figs. 12 and 14).
Re Claim 18, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the method of claim 17, and Garner discloses wherein obtaining further includes providing the operator-provided PLU codes as additional input to the MLM (P67, P118, P128, P129; Figs. 12 and 14).
Re Claim 19, Garner discloses a system, comprising: at least one server comprising at least one processor and a non-transitory computer-readable storage medium; the non-transitory computer-readable storage medium comprising executable instructions; and the executable instructions, when executed by the at least one processor, cause the at least one processor to perform operations comprising: receiving one or more images of an item that was identified by an operator of a terminal during a transaction as being a specific item (P2, P126, P139; the imaging system captures one or more images that are used to identify the product 1020 and/or capture identification information of the product; P59, this can allow the user (e.g., customer, worker, etc.) to select the appropriate product or identify that both are inaccurate);
processing a machine learning model (MLM) with the one or more images provided as input (P62-P64, P72-P73; Fig. 5, selecting images and controlling the input of image data into one or more machine learning models);
receiving as output from the MLM an item type for the item (Fig. 5, P66); and
causing the transaction to be suspended when the item type outputted by the MLM does not comport with a specific item type associated with the specific item (P59, expected accuracy for the product “label1” and approximately 30% expected accuracy for the product “label2”. Additionally or alternatively, the scores can be evaluated relative to a threshold difference or other consideration. For example, when there is a threshold difference in weighted scores, the higher weighted aggregate identifier is selected. In some embodiments, the decision control circuit may cause both products to be presented on the portable device and/or present their relative scores. This can allow the user (e.g., customer, worker, etc.) to select the appropriate product or identify that both are inaccurate. Further, the decision control circuit may prevent a result from being output when a threshold consistency or consensus cannot be determined from the subset of frames and cause further evaluation 330 of subsequent captured frames and/or instruct the user to capture another set of frames of video content for the item. An error notification 332 may be generated when consistency cannot be identified, such as after a threshold number of subsets of frames are evaluated and a consistency cannot be obtained).
Although Garner discloses products and that thresholds may depend on the type of products (P90), Garner fails to specifically disclose that the products are produce items from a scanner or a camera of the terminal and a produce item type; providing an efficient and accurate item type classification that addresses a particular type of retail shrinkage (customer identification of a non-produce item as a lower-cost produce item during a self-checkout); wherein the MLM is specifically trained to detect retail shrinkage caused by customer identification of non-produce items as lower-cost produce items during self-checkouts; and wherein the MLM is trained on both item images and corresponding PLU codes to determine whether a particular item depicted in a particular item image corresponds to an operator-provided PLU code.
However, BOSSARD discloses machine learning models that include different machine learning models for detecting different types of objects, specifically a machine learning model trained to classify or otherwise recognize fruits in images (P35).
Given the teachings of BOSSARD, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner such that the products are produce items and the item type is a produce item type.
As suggested by BOSSARD, doing so would provide machine learning models that may include a universal machine learning model trained to classify or otherwise recognize any type of object detected in an image (P35).
Garner as modified by BOSSARD discloses all of the claimed limitations from above except for: wherein the machine learning model is specifically trained to detect retail shrinkage caused by customer identification of non-produce items as lower-cost produce items during self-checkouts; wherein the MLM is trained on both item images and corresponding PLU codes to determine whether a particular item depicted in a particular item image corresponds to an operator-provided PLU code; that the products are produce items from a scanner or a camera of the terminal; and providing an efficient and accurate item type classification that addresses a particular type of retail shrinkage (customer identification of a non-produce item as a lower-cost produce item during a self-checkout).
Dhankhar discloses wherein the machine learning model is specifically trained to detect retail shrinkage caused by customer identification of non-produce items as lower-cost produce items during self-checkouts (P9, claims 14 and 28, and Abstract; Dhankhar discloses a weight sensor in communication with a controller and configured to measure the weight of the object in combination with a machine-learning model).
Given the teachings of Dhankhar, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner and BOSSARD such that the machine learning model is specifically trained to detect retail shrinkage caused by customer identification of non-produce items as lower-cost produce items during self-checkouts.
Doing so would provide a seamless retail checkout experience (P6).
Garner as modified by BOSSARD, as modified by Dhankhar, discloses all of the claimed limitations from above except for: wherein the MLM is trained on both item images and corresponding PLU codes to determine whether a particular item depicted in a particular item image corresponds to an operator-provided PLU code; that the products are produce items from a scanner or a camera of the terminal; and providing an efficient and accurate item type classification that addresses a particular type of retail shrinkage (customer identification of a non-produce item as a lower-cost produce item during a self-checkout).
Migdal discloses wherein the MLM is trained on both item images and corresponding PLU codes to determine whether a particular item depicted in a particular item image corresponds to an operator-provided PLU code (P22, P29-P30, P36-P38, P59, P99).
Given the teachings of Migdal, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner, as modified by BOSSARD and Dhankhar, such that the MLM is trained on both item images and corresponding PLU codes to determine whether a particular item depicted in a particular item image corresponds to an operator-provided PLU code.
Doing so would provide a quick, accurate means for verifying produce items during transactions at a terminal (P22).
Garner as modified by BOSSARD, Dhankhar, and Migdal fails to specifically disclose that the products are produce items from a scanner or a camera of the terminal, or providing an efficient and accurate item type classification that addresses a particular type of retail shrinkage (customer identification of a non-produce item as a lower-cost produce item during a self-checkout).
Sanil discloses that the products are produce items from a scanner or a camera of the terminal, and providing an efficient and accurate item type classification that addresses a particular type of retail shrinkage, namely customer identification of a non-produce item as a lower-cost produce item during a self-checkout (Abstract, P3, P5, P33, P34-P37, P39-P48, P62, P95).
Given the teachings of Sanil, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garner, BOSSARD, Dhankhar, and Migdal with produce items from a scanner or a camera of the terminal, providing an efficient and accurate item type classification that addresses a particular type of retail shrinkage (customer identification of a non-produce item as a lower-cost produce item during a self-checkout).
Doing so provides a combination of preventative measures, such as computer vision-based product detection and identification, inventory control, and employee training, to prevent shrinkage, protect profitability, and ensure the long-term success of the business (P34).
Re Claim 20, Garner, BOSSARD, Dhankhar, Migdal, and Sanil disclose the system of claim 19, and Garner discloses wherein the transaction terminal is a self-service terminal operated by a customer during the transaction or the transaction terminal is a point-of-sale terminal operated by a cashier on behalf of the customer during the transaction (P30).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 11 and 19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant amended the claims with new limitations, which necessitated a new search and consideration. Therefore, this action is made final.
Conclusion
The following reference is cited but not relied upon:
Pandley discloses a system and method for computer vision (CV) driven applications within an environment that function to seamlessly monitor, track, and account for objects within an observed space.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SONJI N JOHNSON whose telephone number is (571)270-5266. The examiner can normally be reached 9am-9pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steven Paik, can be reached at 571-272-2404. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
SONJI N. JOHNSON
Examiner
Art Unit 2876
/SONJI N JOHNSON/Primary Examiner, Art Unit 2876