Prosecution Insights
Last updated: April 19, 2026
Application No. 18/906,556

AUTOMATIC ITEM IDENTIFICATION DURING ASSISTED CHECKOUT

Final Rejection §103
Filed: Oct 04, 2024
Examiner: WERONSKI, MATTHEW S
Art Unit: 3627
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Radiusai Inc.
OA Round: 2 (Final)
Grant Probability: 10% (At Risk)
OA Rounds: 3-4
To Grant: 4y 0m
Grant Probability with Interview: 29%

Examiner Intelligence

Career Allow Rate: 10% (11 granted / 115 resolved; -42.4% vs TC avg)
Interview Lift: +19.8% for resolved cases with interview
Avg Prosecution: 4y 0m (typical timeline); 32 applications currently pending
Total Applications: 147 across all art units

Statute-Specific Performance

§101: 31.5% (-8.5% vs TC avg)
§103: 37.7% (-2.3% vs TC avg)
§102: 22.3% (-17.7% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 115 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

For the purpose of examination herein with regard to prior art consideration, the effective filing date of the instant application is based on the provisional application filed on October 4, 2023.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gao et al. (US 2024/0220999 A1) in view of Coelho Pinto et al. (US 2025/0218237 A1) and Habib (US 2023/0316786 A1).
Regarding Claim 1 and 11, modified Gao teaches: A system/ method for automatically identifying a plurality of items positioned at a point of sale (POS) system based on a plurality of item parameters associated with each item as provided by a plurality of images captured by a plurality of cameras positioned at the POS system (See Gao ¶ [0024-0025] – a data reading system for scanning items at a self-checkout system that reads optical codes and image data to identify multiple types of items at said self-checkout system and [0031-0033] – data readers of the POS system include multiple cameras and imagers), comprising: at least one processor (See Gao ¶ [0027] – a processor); a memory coupled with the at least one processor, the memory including instructions that, when executed by the at least one processor cause the at least one processor to (See Gao ¶ [0027] – a database [memory] coupled to a processor in an external database for decoding coded information from captured images): extract a plurality of Universal Product Code (UPC) features included in a plurality of item parameters associated with each item positioned on the POS system from the plurality of images captured of each item by the plurality of cameras positioned at the POS system (See Gao ¶ [0002] – UPCs are examples of optical codes, which are referred to as barcodes throughout Gao and are understood to be functionally equivalent for the purpose of examination herein, [0031-0033] – data readers of the POS system include multiple cameras and imagers, [0043] – feature extraction from image analysis of items scanned for a checkout transaction by processing an optical code [UPC/barcode] captured in said image and [0049] – one or more images of each item are obtained by the data reading system, which comprises multiple cameras as noted above), wherein the UPC features associated with each item when combined are indicative as to an identification of the UPC of each item (See Gao ¶ [0049-0050] – one or more
images comprising barcode data are stitched together to identify each item read by the data reading system), … item parameters to a server as the UPC features included in the item parameters are extracted from a plurality of pixels of each image by an item identification system (See Gao ¶ [0041-0042] – image data stream is routed to external servers for analysis, [0043] – feature extraction from image analysis of items scanned for a checkout transaction by processing an optical code [UPC/barcode] captured in said image and [0054] – pixel based image analysis), automatically receive updated item parameters as the UPC features included in the item parameters are … as the UPC features are extracted from the pixels based on past item parameters generated from past UPC features extracted from past images of past items (See Gao ¶ [0071] – a Siamese Neural Network may be used, which requires ongoing [continuous] network training for optimal performance and [0081] – The data reading system includes the data reader and a remote host (e.g., data center) that may be configured to perform updates such as OCR web services, NN [neural network] training, etc. With reference to FIG. 11B, the update process may be in response to a trigger [automatically receive] (e.g., add a new item, low accuracy of an item, new package for pre-existing items, etc.) in which the remote host may receive image data (e.g., images, extracted features from images, locally generated descriptors from such images, user validation information, etc.). The collected image data may be used by the remote host to update the models and/or the descriptors used by the models. 
The updates provided to the data reader may result in a model being updated on the data reader or in the local descriptor database being updated on the data reader); analyze the UPC features associated with each item positioned at the POS system … (See Gao ¶ [0043] – feature extraction from image analysis of items scanned for a checkout transaction by processing an optical code [UPC/barcode] captured in said image and [0049-0051] – one or more images comprising barcode data are stitched together to identify each item read by the data reading system, wherein said stitched together image data is compared to reference image data of said barcode), wherein the item parameter identification database stores different combinations of UPC features with each different combination of UPC features associated with a corresponding item thereby identifying each corresponding item based on each different combination of UPC features associated with each item (See Gao ¶ [0049-0051] – using descriptors from text or alphanumeric characters located relative to barcodes to aid in identifying items, which is a type of UPC feature as described by the specification of the instant application), … the item parameters as the UPC features included in the item parameters are analyzed … (See Gao ¶ [0041] – image data stream is routed to external servers for analysis, [0043] – feature extraction from image analysis of items scanned for a checkout transaction by processing an optical code [UPC/barcode] captured in said image and [0053-0055] – comparing scanned images of extracted item features with reference image data to verify identification of items in said scanned images), identify each corresponding item positioned at the POS system when the UPC features associated with each item when combined match a corresponding combination of UPC features as stored in the item parameter identification database (See Gao ¶ [0053-0054] – cropping image data of a region of interest of a product label 
[combination of UPC features], including an optical code [barcode/ UPC] to match a corresponding region of a reference item optical code) and integrating the updated item parameters as the updated UPC features received from the neural network in identifying each corresponding item positioned at the POS system (See Gao ¶ [0081] – The data reading system includes the data reader and a remote host (e.g., data center) that may be configured to perform updates such as OCR web services, NN [neural network] training, etc. With reference to FIG. 11B, the update process may be in response to a trigger [automatically receive] (e.g., add a new item, low accuracy of an item, new package for pre-existing items, etc.) in which the remote host may receive image data (e.g., images, extracted features from images, locally generated descriptors from such images, user validation information, etc.). The collected image data may be used by the remote host to update the models and/or the descriptors used by the models [integrating the updated item parameters by example]. 
The updates provided to the data reader may result in a model being updated on the data reader or in the local descriptor database being updated on the data reader), and stream the updated item parameters as the updated UPC features are updated (See Gao ¶ [0041] – image data stream is routed to external servers for analysis, [0043] – feature extraction from image analysis of items scanned for a checkout transaction by processing an optical code [UPC/barcode] captured in said image and [0045] – item descriptor databases are updated) based on whether the UPC features associated with each item when combined match the corresponding combination of UPC features as stored in the item parameter identification database (See Gao ¶ [0057-0059] – the processor tallies the number of matching optical characters present in the cropped image and the descriptor associated to the reference image data and computes an item match score corresponding to a match rate for those optical characters) to the server for the neural network to incorporate into identifying additional item parameters for additional items that have additional UPC features based on past item parameters that include past UPC features (See Gao ¶ [0041] – image data stream is routed to external servers for analysis, [0045] – item descriptor databases are updated and [0081] – The collected image data may be used by the remote host to update the models and/or the descriptors used by the models. The updates provided to the data reader may result in a model being updated on the data reader or in the local descriptor database being updated on the data reader). 
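For orientation, the match-scoring approach the examiner relies on from Gao ¶ [0057-0059] — tallying optical characters in the cropped image that match the reference descriptor and computing an item match score against a threshold — can be sketched as follows. All names and the threshold value are illustrative assumptions, not drawn from Gao:

```python
def match_score(ocr_chars: str, reference_chars: str) -> float:
    """Tally OCR characters from a cropped label image that match the
    reference descriptor, position by position, and return a match rate."""
    if not reference_chars:
        return 0.0
    matches = sum(
        1 for got, want in zip(ocr_chars, reference_chars) if got == want
    )
    return matches / len(reference_chars)

# Illustrative threshold only — Gao describes a threshold comparison,
# not this specific value.
MATCH_THRESHOLD = 0.8

def identify(ocr_chars: str, reference_chars: str) -> bool:
    """Accept the identification when the match rate clears the threshold."""
    return match_score(ocr_chars, reference_chars) >= MATCH_THRESHOLD
```

Under this reading, a label with one misread character out of five (a 0.8 match rate) would still clear the assumed threshold, while a mostly unreadable label would not.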
While Gao teaches using machine learning for image feature extraction of one or more images of items at a checkout system by one or more cameras and streaming the image data to external servers for analysis, (Gao ¶ [0033], [0041-0043], [0049-0059]), Gao does not explicitly teach that said data is streamed continuously to be continuously trained on by a neural network based on machine learning as the neural network continuously updates the item parameters. This is taught by Coelho Pinto (See ¶ [0097-0100] – This visual identification uses Convolutional Neural Networks (CNN) also known as Shift Invariant or Space Invariant Artificial Neural Networks… Continual Learning is the ability of a model to learn continuously from a stream of data. In practice, this means supporting a model's ability to learn and adapt autonomously in use as new data arrives. This process is made possible by the training functionality that cabinet provides. Since when a new product is created the machine learns its characteristics, in use/sales when an interaction is classified it is fed back into the learning cycle so that it learns over time and its accuracy tends to 100%... these algorithms are active whenever the equipment is in use (purchase, train, replenish) causing the data to be captured and this process continues to improve). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the machine learning based image feature extraction and streaming system of Gao the use of continuous image streaming for continuous learning of a machine learning model as taught by Coelho Pinto to improve the accuracy of the machine learning model (Coelho Pinto ¶ [0099-0100] and [0118]). Thereby improving the accuracy of the machine learning based image feature extraction and streaming system of Gao. 
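The Continual Learning behavior cited from Coelho Pinto ¶ [0097-0100] — a model that keeps learning from the stream of interactions it classifies, feeding each classified interaction back into the learning cycle — amounts to a feedback loop like the following heavily simplified sketch. The class, the replay buffer, and the nearest-example "classifier" are stand-ins invented for illustration, not the reference's CNN:

```python
from collections import deque

class ContinualClassifier:
    """Toy stand-in for a continually learning item classifier: each
    classified/validated interaction re-enters the learning cycle."""

    def __init__(self, replay_size: int = 100):
        # Replay buffer of (features, label) pairs fed back from use.
        self.replay = deque(maxlen=replay_size)

    def classify(self, features):
        # Real system: CNN forward pass. Here: nearest stored example.
        best = None
        for stored, label in self.replay:
            dist = sum((a - b) ** 2 for a, b in zip(stored, features))
            if best is None or dist < best[0]:
                best = (dist, label)
        return best[1] if best else None

    def feed_back(self, features, label):
        # New data from use (purchase, train, replenish) keeps arriving,
        # so accuracy tends to improve over time.
        self.replay.append((features, label))
```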
While Gao teaches using machine learning for image feature extraction of one or more images of items at a checkout system by one or more cameras and stitching said one or more images together to compare image data of a scanned item with stored reference image data to identify said scanned item for a checkout transaction, (Gao ¶ [0033], [0043], [0049-0059]), Gao does not explicitly teach fusing each different set of UPC features from each to determine whether the UPC features associated with each item when combined matches the corresponding combination of the UPC features stored in item parameter identification database. This is taught by Habib (See ¶ [0037-0039] – an image concatenation module identifying five different images and the features therein corresponding to five different cameras and stitching each of the five images into a single image, wherein said concatenated single image is a composite of the five images and the respective features therein, thereby increasing the set of features by example, which is illustrated in Figs. 5, 6A and 6B and [0049] - The other data can include item logos or color schemes from the concatenated image, item labels or barcodes [UPC features by example], and the like). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the machine learning based image feature extraction and streaming system of Gao the use of image concatenation to form a single image from multiple different images as taught by Habib to improve the object recognition accuracy in circumstances where items are fully or partially obstructed from view by one or more cameras (Habib ¶ [0022]). Thereby improving the accuracy of the machine learning based image feature extraction and streaming system of Gao. 
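The image-concatenation idea cited from Habib ¶ [0037-0039] — stitching the views from five cameras into one composite so that features obstructed from one camera survive in another — can be sketched roughly as below. The side-by-side layout, zero-padding, and single-channel pixel grids are simplifying assumptions for illustration, not Habib's actual module:

```python
def concatenate_views(views):
    """Stitch per-camera crops side by side into one composite image.
    Each view is a 2-D grid (rows of pixel values); Habib describes five
    cameras, but any count works. Shorter views are padded with zero
    pixels to a common height — a simplification of real stitching."""
    height = max(len(v) for v in views)
    composite_rows = []
    for r in range(height):
        row = []
        for v in views:
            width = len(v[0])
            # Pad with zero pixels when this view has no row r.
            row.extend(v[r] if r < len(v) else [0] * width)
        composite_rows.append(row)
    return composite_rows

# Five toy single-channel "camera views" of differing heights.
views = [[[1] * 4 for _ in range(h)] for h in (8, 10, 6, 10, 9)]
composite = concatenate_views(views)
```

The composite carries every view's features, which is the property the examiner leans on for the obstruction rationale.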
Regarding Claim 2 and 12, modified Gao teaches: The system/ method of claim 1 and 11, wherein the processor is further configured to: extract the plurality of item parameters associated with each item positioned at the POS system from the plurality of images captured of each item by the plurality of cameras positioned at the POS system (See Gao ¶ [0043] – feature extraction from image analysis of items scanned for a checkout transaction by processing an optical code [UPC/barcode] captured in said image and [0049-0051] – one or more images comprising barcode data are stitched together to identify each item read by the data reading system, wherein said stitched together image data is compared to reference image data of said barcode), wherein the item parameters associated with each item when combined with the extracted UPC features associated with each item are indicative as to the identification of each corresponding item thereby enabling the identification of each corresponding item (See Gao ¶ [0049-0051] – using descriptors from text or alphanumeric characters located relative to barcodes to aid in identifying items, which is a type of UPC feature as described by the specification of the instant application). 
Regarding Claim 3 and 13, modified Gao teaches: The system/ method of claim 2 and 12, wherein the processor is further configured to: analyze the item parameters associated with each item positioned at the POS system in combination with the UPC features associated with each item positioned at the POS system to determine whether the item parameters when combined with the combination of UPC features matches a corresponding combination of the item parameters stored in the item parameter identification database (See Gao ¶ [0043] – feature extraction from image analysis of items scanned for a checkout transaction by processing an optical code [UPC/barcode] captured in said image, [0049-0051] – one or more images comprising barcode data are stitched together to identify each item read by the data reading system, wherein said stitched together image data is compared to reference image data of said barcode and [0053-0054] – cropping image data of a region of interest of a product label [combination of UPC features], including an optical code [barcode/ UPC] to match a corresponding region of a reference item optical code), wherein the item parameter identification database stores different combinations of item parameters with different combinations of UPC features with each different combination of item parameters when combined with each different combination of UPC features are associated with a corresponding item thereby identifying each corresponding item based on each different combination of item parameters when combined with each different combination of UPC features (See Gao ¶ [0043] – feature extraction from image analysis of items scanned for a checkout transaction by processing an optical code [UPC/barcode] captured in said image, [0049-0051] – one or more images comprising barcode data are stitched together [combined] to identify each item read by the data reading system, wherein said stitched together image data is compared to reference image data of said 
barcode and [0053-0057] – using optical code data to identify items in combination with OCR techniques to capture multiple types of text identifying an item, thereby providing multiple combinations of item parameters and UPC features captured by image data to identify said item).

Regarding Claim 4 and 14, modified Gao teaches: The system/ method of claim 3 and 13, wherein the processor is further configured to: identify each corresponding item positioned at the POS system when the item parameters associated with each item when combined with the UPC features associated with each item match a corresponding combination of item parameters and UPC features as stored in the item parameter identification database (See Gao ¶ [0053-0057] – using optical code data and OCR techniques to match captured item data with reference [stored] item data for identifying said item based on a threshold match score).

Regarding Claim 5 and 15, modified Gao teaches: The system/ method of claim 1 and 11, wherein the processor is further configured to: extract a plurality of text features from the plurality of images captured of each item by the plurality of cameras positioned at the POS system that includes a plurality of digits and a barcode of each UPC of each item positioned at the POS (See Gao ¶ [0043] – feature extraction from image analysis of items scanned for a checkout transaction by processing an optical code [UPC/barcode] and other information [text] captured in said image, [0049-0051] – one or more images from multiple data readers [cameras as noted above in claim 1] comprising barcode data are stitched together [combined] to identify each item read by the data reading system, wherein said stitched together image data is compared to reference image data of said barcode, Fig. 4 – capturing barcode data and Fig.
7 – capturing a plurality of digits relating to said barcode), wherein the digits of each UPC captured from the plurality of images of each item are partial digits of each UPC and the barcode of each UPC captured from the plurality of images of each item is a partial barcode of each UPC (See Gao Fig. 7 – showing bounding boxes around partial segments of the UPC digits as well as partial segments of the UPC barcode).

Regarding Claim 6 and 16, modified Gao teaches: The system/ method of claim 5 and 15, wherein the processor is further configured to: compare the partial digits of each UPC and the partial barcode of each UPC as captured from the images of each item to a complete set of digits and a complete barcode of each UPC associated with each item stored in the item parameter identification database (As the specification of the instant application describes the partial digits and partial barcode of each UPC as the captured digits and barcode of said UPC and the complete set of digits and complete barcode of each UPC as the reference digits and barcode of each UPC, see Gao ¶ [0049] – stitching together images of barcodes and numeric or alphanumeric characters to obtain item data, thereby showing capture of partial digits and partial barcodes as the parts are stitched together to form said item data, [0053-0057] – using optical code data and OCR techniques to capture item data from a portion of a product label on said item and compare said captured item data with reference [stored] item data for identifying said item based on a threshold match score and Fig.
7 – showing bounding boxes around partial segments of the UPC digits as well as partial segments of the UPC barcode), wherein the item parameter identification database stores the complete set of digits and the complete barcode of each UPC associated with each item (See Gao ¶ [0051-0052] – the reference image data comprises text descriptor information comprising text size, font type and location relative to a barcode reference point and other suitable information comprising a set of reference information, Fig. 5 – reference image of complete barcode and surrounding text/digits, Fig. 6 – capture of partial text surrounding said barcode and Fig. 7 – showing bounding boxes around partial segments of the UPC digits as well as partial segments of the UPC barcode as described by ¶ [0059-0060]); determine whether the partial digits of each UPC and the partial barcode of each UPC as captured from the images of each item when combined matches a corresponding combination of digits and barcode included in the complete set of digits and the complete barcode of each UPC as stored in the item parameter identification database (See Gao ¶ [0049] – stitching together images of barcodes and numeric or alphanumeric characters to obtain item data, thereby showing capture of partial digits and partial barcodes as the parts are stitched together to form said item data, [0053-0059] – using optical code data and OCR techniques to capture item data from a portion of a product label on said item and compare said captured item data with reference [stored] item data for identifying said item based on a threshold match score); and identify each corresponding item positioned at the POS system when the partial digits of each UPC and the partial barcode of each UPC when combined match a corresponding combination of digits and barcode included in the complete set of digits and the complete barcode of each UPC as stored in the item parameter identification database (See Gao ¶ [0049-0059] – an
item is identified when a match score between captured barcode and text data is above a certain threshold when compared to stored reference barcode and text data).

Regarding Claim 7 and 17, modified Gao teaches: The system/ method of claim 6 and 16, wherein the processor is further configured to: determine a location of the partial digits of each UPC as captured from the images of each item as positioned relative to the complete set of digits of each UPC and a location of the partial barcode of each UPC as captured from the images of each item as positioned relative to the complete barcode of each UPC (As the specification of the instant application describes the partial digits and partial barcode of each UPC as the captured digits and barcode of said UPC and the complete set of digits and complete barcode of each UPC as the reference digits and barcode of each UPC, see Gao ¶ [0049-0059] – stitching together images of barcodes and numeric or alphanumeric characters to obtain item data, thereby showing capture of partial digits and partial barcodes as the parts are stitched together to form said item data. Among relevant item data is text location information determined by coordinates relative to a barcode reference point. Captured barcode and text location data is then compared to reference data for item identification); determine whether the location of the partial digits of each UPC and the location of the partial barcode of each UPC when combined matches a corresponding location of digits and barcode included in the complete set of digits and the complete barcode of each UPC as stored in the item parameter identification database (See Gao ¶ [0049-0059] – stitching together images of barcodes and numeric or alphanumeric characters to obtain item data, thereby showing capture of partial digits and partial barcodes as the parts are stitched together to form said item data.
Among relevant item data is text location information determined by coordinates relative to a barcode reference point. Captured barcode and text location data is then compared/ matched to reference data for item identification); and identify each corresponding item positioned at the POS system when the location of the partial digits of each UPC and the location of the partial barcode of each UPC when combined match a corresponding location of digits and location of barcode included in the complete set of digits and the complete barcode of each UPC as stored in the item parameter identification database (See Gao ¶ [0049-0059] – stitching together images of barcodes and numeric or alphanumeric characters to obtain item data, thereby showing capture of partial digits and partial barcodes as the parts are stitched together to form said item data. Among relevant item data is text location information determined by coordinates relative to a barcode reference point. Captured barcode and text location data is then compared/ matched to stored reference data for item identification). 
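The partial-to-complete matching that claims 6/7 and 16/17 recite — locating a captured fragment of UPC digits within the complete stored UPC, anchored by the fragment's position relative to a barcode reference point — can be illustrated with the sketch below. The reference database, the example UPC values, and the integer-offset coordinate scheme are all invented for illustration; neither the application nor Gao specifies this layout:

```python
# Hypothetical reference database mapping complete 12-digit UPCs to items.
ITEM_DB = {
    "036000291452": "tissues",
    "012345678905": "cola",
}

def identify_from_partial(partial_digits: str, offset: int):
    """Match partial digits captured from an image against each complete
    UPC in the database, using the fragment's position (assumed here to
    be a digit offset from the start of the barcode) to anchor the
    comparison. Returns the matching item, or None."""
    for full_upc, item in ITEM_DB.items():
        if full_upc[offset:offset + len(partial_digits)] == partial_digits:
            return item
    return None
```

A fragment of four digits plus its location can thus be enough to single out one stored UPC, which is the gist of the location-based matching the examiner maps onto Gao's coordinate-relative text descriptors.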
Regarding Claim 8 and 18, modified Gao teaches: The system/ method of claim 1 and 11, wherein the processor is further configured to: receive a plurality of images of the UPC features of each item positioned on the POS system by a plurality of cameras positioned at the UPC system, wherein each different image captured by each different camera captures … UPC features of each item positioned at the UPC system (See Gao ¶ [0033] – data readers may be monochrome or color cameras for capturing monochrome or color images, thereby showing different cameras capturing different images, [0039-0040] – image readers/ cameras are part of a POS system, [0043] – image data comprising optical codes is analyzed with feature extraction and other techniques, [0049] – stitching together one or more images of barcodes and numeric or alphanumeric characters to obtain item data from items processed for checkout [items positioned at the UPC system]); and extract … UPC features from each different image captured by each different camera, wherein … UPC features captured by each different camera is … UPC features depicting … each corresponding UPC associated with each item positioned on the POS system (See Gao ¶ [0033] – data readers may be monochrome or color cameras for capturing monochrome or color images, thereby showing different cameras capturing different images, [0043] – image data comprising optical codes is analyzed with feature extraction and other techniques, [0049] – stitching together one or more images of barcodes and numeric or alphanumeric characters to obtain item data from items processed for checkout [items positioned at the UPC system]).
While Gao teaches using image feature extraction of one or more images of items at a checkout system by one or more cameras and stitching said one or more images together to compare image data of a scanned item with stored reference image data to identify said scanned item for a checkout transaction, (Gao ¶ [0033], [0043], [0049-0059]), Gao does not explicitly teach capturing a different set of UPC features comprising a different image clip depicting different partial features. This is taught by Habib (See ¶ [0038-0039] – an image concatenation module identifying five different images corresponding to five different cameras and stitching each of the five images into a single image, which is illustrated in Figs. 5, 6A and 6B). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the image feature extraction system of Gao the use of image concatenation to form a single image from multiple different images as taught by Habib to improve the object recognition accuracy in circumstances where items are fully or partially obstructed from view by one or more cameras (Habib ¶ [0022]). 
Regarding Claim 9 and 19, modified Gao teaches: The system/ method of claim 8 and 18, wherein the processor is further configured to: fuse … UPC features from each … of UPC features depicting … each UPC associated with each item as extracted from each … image captured by each different camera of the UPC features of each item positioned on the POS system, wherein each … set of UPC features from each … of UPC features depicting … each UPC when fused together depict … UPC features of each UPC associated with each item (See Gao ¶ [0033] – data readers may be monochrome or color cameras for capturing monochrome or color images, thereby showing different cameras capturing different images, [0039-0040] – image readers/ cameras are part of a POS system, [0043] – image data comprising optical codes is analyzed with feature extraction and other techniques, [0049] – stitching together [fusing] one or more images of barcodes and numeric or alphanumeric characters to obtain item data from items processed for checkout [items positioned at the UPC system]).

While Gao teaches using image feature extraction of one or more images of items at a checkout system by one or more cameras and stitching said one or more images together to compare image data of a scanned item with stored reference image data to identify said scanned item for a checkout transaction, (Gao ¶ [0033], [0043], [0049-0059]), Gao does not explicitly teach fusing each different set of UPC features from each different image clip depicting different partial features when fused together depict an increased set of features.
This is taught by Habib (See ¶ [0037-0039] – an image concatenation module identifying five different images and the features therein corresponding to five different cameras and stitching each of the five images into a single image, wherein said concatenated single image is a composite of the five images and the respective features therein, thereby increasing the set of features by example, which is illustrated in Figs. 5, 6A and 6B). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the image feature extraction system of Gao the use of image concatenation to form a single image from multiple different images as taught by Habib to improve the object recognition accuracy in circumstances where items are fully or partially obstructed from view by one or more cameras (Habib ¶ [0022]).

Regarding Claim 10 and 20, modified Gao teaches: The system/ method of claim 9 and 19, wherein the processor is further configured to: compare each fused set of UPC features from each … of UPC features of each UPC to a complete set of UPC features of each UPC associated with each item stored in the item parameter identification database (As the specification of the instant application describes the partial digits and partial barcode of each UPC as the captured digits and barcode of said UPC and the complete set of digits and complete barcode of each UPC as the reference digits and barcode of each UPC, see Gao ¶ [0049] – stitching together images of barcodes and numeric or alphanumeric characters to obtain item data, thereby showing a fused set of UPC features, [0053-0057] – using optical code data and OCR techniques to capture item data from a portion of a product label on said item and compare said captured item data with reference [stored] item data for identifying said item based on a threshold match score and Fig.
7 – showing bounding boxes around partial segments of the UPC digits as well as partial segments of the UPC barcode), wherein the item parameter identification database stores the complete set of UPC features of each UPC associated with each item (See Gao ¶ [0051-0052] – the reference image data comprises text descriptor information comprising text size, font type and location relative to a barcode reference point and other suitable information comprising a set of reference information, Fig. 5 – reference image of complete barcode and surrounding text/ digits, Fig. 6 – capture of partial text surrounding said barcode and Fig. 7 – showing bounding boxes around partial segments of the UPC digits as well as partial segments of the UPC barcode as described by ¶ [0059-0060]); determine whether each fused set of UPC features from each … of UPC features of each UPC when combined matches a corresponding set of UPC features in the complete set of UPC features of each UPC as stored in the item parameter identification database (See Gao ¶ [0049] – stitching together images of barcodes and numeric or alphanumeric characters to obtain item data, thereby showing a fused set of UPC features, [0053-0059] – using optical code data and OCR techniques to capture item data from a portion of a product label on said item and compare said captured item data with reference [stored] item data for identifying said item based on a threshold match score); and identify each corresponding item positioned at the POS system when each fused set of UPC features from each … of UPC features of each UPC when combined match a corresponding set of UPC features in the complete set of UPC features of each UPC as stored in the item parameter identification database (See Gao ¶ [0049-0059] – an item is identified when a match score between captured barcode and text data is above a certain threshold when compared to stored reference barcode and text data, wherein said captured barcode and text data is 
stitched together from one or more images, thereby showing a fused set of UPC features by example). While Gao teaches using image feature extraction of one or more images of items at a checkout system by one or more cameras and stitching said one or more images together to compare image data of a scanned item with stored reference image data to identify said scanned item for a checkout transaction, (Gao ¶ [0033], [0043], [0049-0059]), Gao does not explicitly teach fusing each different set of UPC features from each different image clip. This is taught by Habib (See ¶ [0037-0039] – an image concatenation module identifying five different images and the features therein corresponding to five different cameras and stitching each of the five images into a single image, wherein said concatenated single image is a composite of the five images and the respective features therein, thereby increasing the set of features by example, which is illustrated in Figs. 5, 6A and 6B). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the image feature extraction system of Gao the use of image concatenation to form a single image from multiple different images as taught by Habib to improve the object recognition accuracy in circumstances where items are fully or partially obstructed from view by one or more cameras (Habib ¶ [0022]). Response to Arguments Applicant's arguments filed 07/21/2025 have been fully considered but they are not persuasive. Rejection under 35 U.S.C. § 101: The amendments to independent claims 1 and 11 improve patent eligibility of the claimed invention of the instant application and the previous rejection under 35 U.S.C. § 101 in withdrawn. Said independent claims are tied to specific hardware (processors, cameras, memory) and performs a concrete technical function (automatically identifying physical items at checkout). 
The technical improvement is in the functioning of the checkout system (automatic identification, reduced manual intervention, improved accuracy) by integrating cameras, computing devices, and machine learning to extract item features from images and match them with a database, enabling automatic identification even when barcodes are incomplete. The claims appear to be directed to patent-eligible subject matter under § 101 because they are integrated into a practical application with specific hardware and technical steps while solving a technical problem in the field of retail checkout.

Rejection under 35 U.S.C. § 102: The amendments to independent claims 1 and 11 overcome the Gao prior art reference and the previous rejection under 35 U.S.C. § 102 is withdrawn. Any arguments solely against Gao are herein rendered moot. However, the invention of the instant application remains unpatentable because it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate features from Coelho Pinto and Habib in the invention of Gao as described above in the current rejection under 35 U.S.C. § 103.

Rejection under 35 U.S.C. § 103: No new arguments are made for dependent claims 8-10 and 18-20 other than those previously stated for their respective independent base claims 1 and 11. The previous rejection under 35 U.S.C. § 103 is maintained, as it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate features from Coelho Pinto and Habib in the invention of Gao as described above in the current rejection of claims 8-10 and 18-20 under 35 U.S.C. § 103. The applicant is generally reminded that prior art must be considered in its entirety (MPEP 2141.02(VI)).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW S. WERONSKI, whose telephone number is (571) 272-5802. The examiner can normally be reached M-F, 8 am - 5 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fahd A. Obeid, can be reached at 571-270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW S WERONSKI/
Examiner, Art Unit 3627

/FAHD A OBEID/
Supervisory Patent Examiner, Art Unit 3627
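The image concatenation the rejection relies on from Habib (¶ [0037]-[0039]: five camera views stitched into a single composite so features hidden in one view survive in the composite) can be sketched as below. This is a minimal illustration under assumed data shapes; images are modeled as row-major lists of pixel rows, and the function name and pixel values are hypothetical, not from Habib.

```python
# Hedged sketch of multi-camera image concatenation: same-height frames are
# stitched left-to-right into one composite image. A real system would align
# and blend frames; this only demonstrates the composite-of-views idea.

def concatenate_horizontally(images):
    """Stitch same-height images left-to-right into a single composite."""
    height = len(images[0])
    assert all(len(img) == height for img in images)
    # Concatenate the corresponding row of every image, row by row.
    return [sum((img[row] for img in images), []) for row in range(height)]

# Five 2x2 "camera views" (pixel values are arbitrary labels).
views = [[[c, c], [c, c]] for c in "ABCDE"]
composite = concatenate_horizontally(views)

print(len(composite), len(composite[0]))  # 2 10: one image, five views wide
```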

Prosecution Timeline

Oct 04, 2024
Application Filed
Jan 14, 2025
Non-Final Rejection — §103
Jul 21, 2025
Response Filed
Jan 26, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443938
Point-of-Sale (POS) Operation System
2y 5m to grant Granted Oct 14, 2025
Patent 12400247
REPRESENTING SETS OF ENTITIES FOR MATCHING PROBLEMS
2y 5m to grant Granted Aug 26, 2025
Patent 12367454
METHOD AND SYSTEM FOR VEHICLE MANAGEMENT
2y 5m to grant Granted Jul 22, 2025
Patent 12333614
QUALITY, AVAILABILITY AND AI MODEL PREDICTIONS
2y 5m to grant Granted Jun 17, 2025
Patent 12327393
SYSTEM AND METHOD FOR CAPTURING CONSISTENT STANDARDIZED PHOTOGRAPHS AND USING PHOTOGRAPHS FOR CATEGORIZING PRODUCTS
2y 5m to grant Granted Jun 10, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 10%
With Interview (+19.8%): 29%
Median Time to Grant: 4y 0m
PTA Risk: Moderate
Based on 115 resolved cases by this examiner. Grant probability derived from career allow rate.
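The headline figures can be reproduced from the career data quoted in this report (11 grants out of 115 resolved cases, +19.8-point interview lift). The rounding conventions below are assumptions about how the dashboard derives its displayed values.

```python
# Hypothetical reconstruction of the dashboard metrics from the quoted
# career data; rounding to whole percentages is an assumption.
granted, resolved = 11, 115

allow_rate = granted / resolved               # ~0.0957
interview_lift = 0.198                        # lift observed in interviewed cases
with_interview = allow_rate + interview_lift  # ~0.2937

print(f"{allow_rate:.0%}")       # 10%
print(f"{with_interview:.0%}")   # 29%
```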
