DETAILED ACTION
The amendment received on December 28, 2025 has been acknowledged. Claims 2 and 7 have been cancelled, and the amendments to claims 1 and 6 have been entered. Therefore, claims 1, 3-6, and 8-10 are pending.
Response to Arguments
Applicant's arguments filed December 28, 2025 have been fully considered but they are not persuasive.
Applicant argues: “To the extent the Examiner may now rely upon Ron in a new rejection of the amended claims, Applicant submits that it would not have been obvious to a person of ordinary skill in the art to combine Ron with Moshkovitz and Doke. Ron is directed to resolving events (e.g., item additions or removals) in item-identifying carts, such as those used in fully automated, cashierless stores (e.g., similar to Amazon Go technology), where the focus is on event resolution across an entire store system rather than low-power, self-checkout receptacles with targeted processing of dynamic content for power efficiency.”
Examiner respectfully disagrees. Moshkovitz teaches a smart shopping receptacle capable of utilizing imaging modules to construct a grid in space and to detect any change within the grid. This allows the system to detect product insertion into the shopping receptacle.
Doke is directed to conserving power in item-identifying carts that automatically identify items a user places into a basket of a cart. Doke utilizes image-processing techniques to identify items added to or removed from the cart and to update a virtual shopping list.
Doke, Col. 8, lines 40-50, further teaches that the activity-detection component may analyze the prior ten (or other number of) frames to determine whether a threshold number of frames (e.g., eight frames) have been determined to represent the predefined activity. If so, the activity-detection component may store an indication that the cart is currently experiencing the predefined activity, meaning that a user is likely reaching into the basket area or other storage location of the cart, potentially to add an item to the cart or remove an item from the cart.
Doke, Col. 8, lines 60-65, further states that during times of no activity the item-identification component may be turned off or may operate at a reduced frame rate, thus conserving battery power consumed by the item-identification component.
In fact, Doke states that the item-identification component may analyze each frame of image data using an item- or barcode-localization component that may identify a region of an individual frame of data that includes an item or a barcode (Col. 9, lines 3-6).
The cited portions of Doke provide a system and method capable of monitoring for dynamic content within a shopping receptacle using cameras and of conserving power when identifying an event such as adding an item to, or removing an item from, the basket of the shopping receptacle.
The Moshkovitz-Doke combination discloses a smart shopping receptacle capable of forming a field of view using sensors or cameras and of identifying items added to or removed from the shopping receptacle.
Ron et al. is combined with the Moshkovitz-Doke combination to explicitly teach determining the quantity of items involved in the addition or removal of items from the basket of a shopping receptacle. Ron, Doke, and Moshkovitz all teach a shopping receptacle with imaging devices, such as cameras, capable of creating a field of view. Each reference teaches the limitations as set forth below, and incorporating the teachings of Ron into the Moshkovitz-Doke combination would require only additional software that one of ordinary skill in the art could integrate to arrive at the claimed invention. Therefore, Moshkovitz, Doke, and Ron remain applicable as prior art against amended claims 1 and 6.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-6, and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Moshkovitz et al., WIPO Publication WO 222/144888, in view of Doke et al., U.S. Patent No. 12,394,194, further in view of Ron et al., U.S. Patent No. 11,620,822.
As per Claim 1, Moshkovitz et al. discloses a low-power, self-checkout shopping receptacle (Figure 1, Shopping Cart 1050) comprising:
one or more image sensors arranged on the shopping receptacle (Figure 4A, cameras 403i and 404j); and
computer hardware connected with the one or more image sensors (pg.11, ¶ [0044] discusses the digital camera can comprise an image capturing unit or module, a capture controlling module, and a processing unit (which can be the same as or separate from the central processing module); pg.20, ¶ [0062] discusses that the term module refers to software, hardware (for example, a processor), or a combination thereof that is programmed with instructions for carrying out an algorithm or method), and configured to perform operations comprising:
demarcate an area of interest associated with the shopping receptacle within a field of view (FOV) of the one or more image sensors (pg.1, ¶ [0023] discusses the system defines a Virtual Recognition Grid (VRG) by creating a virtual area in 3D space, and by monitoring and recognizing objects traversing e.g., “breach…pg.2, ¶ [0027] discusses Camera's FOV is another aspect for forming a VRG);
detect one or more features of one or more products that are imaged by the one or more image sensors in the area of interest associated with the shopping receptacle (pg.4, ¶ [0035] discusses Images derived by the processors from objects' motion through and or within the VRG frame, would contain information about the objects intersecting the VRG, rather than the entire 3D volume as captured by the VRG sensors and/or cameras); and
associate the detected features of the one or more products with a product database to determine a product identification for each of the one or more products that are imaged by the one or more image sensors (pg.19, ¶ [0061] discusses detection of product 500 trajectory through e.g., layer 7033, the stable bottom, will for example trigger the system to communicate with the product database and initiate product classification of an incoming product 500…detection of product 500 trajectory through e.g., layer 7031, the insertion VRG layer, can for example, trigger the classification of the product, update the list of items and so on);
identify dynamic content within the area of interest associated with the shopping receptacle (pg.1, ¶ [0024] discusses the VRG sensors are cameras, such as RGB cameras, utilized for capturing an image of the object traversing through the VRG construct),
the dynamic content related to image content from the one or more image sensors that changes over time (pg.4, ¶ [0024] discusses Capturing of moving object with sufficient image quality needed to visualize such small details, i.e., capturing sharp images of objects during their motion, may require low exposure times); and
process only the dynamic content to determine one or more features of a product that is imaged by the one or more sensors (pg.22, ¶ [0065] discusses detecting motion of the object through and/or within the 3D virtual construct, and detect the object/s type).
Moshkovitz teaches a system and method capable of identifying items placed within a shopping cart.
However, Moshkovitz fails to explicitly state associating the detected features of the one or more products with a product database to determine a product identification for each of the one or more products that are imaged by the one or more image sensors.
Doke et al. teaches associating the detected features of the one or more products with a product database to determine a product identification for each of the one or more products that are imaged by the one or more image sensors (Col. 36, lines 50-54 discusses the item-identification component 1338 may determine an item identifier 1370 for the item in the image data 1356 (or multiple item identifiers 1370) that corresponds to an item in the item data 1358 to which the item corresponds).
Therefore, it would have been obvious to one of ordinary skill in the art of item-identifying carts before the effective filing date of the claimed invention to modify the system of Moshkovitz et al. to include the ability to identify multiple products entering a shopping cart, as taught by Doke et al., to provide an item-identifying, mobile cart that may be utilized by a user in a materials handling facility to automatically identify items that the user places into a basket of the cart (Abstract).
Moshkovitz et al. (including its incorporated references) and Doke et al. form the Moshkovitz-Doke combination, which teaches the limitations of the claimed invention.
Doke et al., Col. 9, lines 43-49, teaches that by turning off the item-identification component, or decreasing the rate at which it operates, during time windows of no activity, battery power of the cart may be conserved without sacrificing the accuracy of the item-identification component.
Col. 13, lines 66-67, discusses that the cart may begin analyzing the image data 126 to attempt to identify an item placed into or removed from the basket or other storage location of the cart 104, or may increase a frame rate associated with this analysis. Lines 33-38 discuss that the activity-detection component 118 may generate feature data for individual frames of the image data 126 and may input this feature data into the model 120, which may be trained to output respective labels 128 indicating whether each individual frame represents the predefined activity.
The cited portions of Doke teach a shopping cart capable of entering a low-power state, detecting activity (i.e., adding or removing an item), and extracting feature data from images.
However, the Moshkovitz-Doke combination fails to explicitly teach processing only the dynamic content to determine one or more features of an additional product that is imaged by the one or more sensors.
Ron et al. teaches processing only the dynamic content to determine one or more features of an additional product that is imaged by the one or more sensors (Col. 9, lines 44-51 discusses the cart may attempt to determine a quantity of the items involved, such as whether one, two, or any other number of instances was placed into the cart or removed from the cart… in some instances, the outcome of a particular event may involve multiple identified items and quantities. For instance, after identifying a first item and a second item, the cart may determine that two instances of the first item were added to the cart).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate the ability to identify a second item placed within a shopping cart, as in the improvement discussed in Ron et al., into the system executing the method of the Moshkovitz-Doke combination. As in Ron et al., it was within the capabilities of one of ordinary skill in the art to add the determination of whether a second item has been placed in the shopping cart to the Moshkovitz-Doke combination, with the predictable result of identifying an additional item as needed in the Moshkovitz-Doke combination.

As per Claim 3, Moshkovitz et al. discloses the low-power, self-checkout shopping receptacle in accordance with claim 1, wherein the operations further comprise:
recognize, using computational logic and the product database, the product identification for each of the one or more products in the shopping receptacle (pg.9, ¶ [0040] discusses capturing textual information located on a retail-product wrap from two or more sides, by OCR algorithms, leads to improved matching of the textual sequences on that product to a database of textual words from all products in a given store); and
display the product identification for each of the one or more products in an electronic display (pg.18, ¶ [0055] discusses the user interface module is capable of displaying any data that it reads from the imaging/sensing module).
As per Claim 4, Moshkovitz et al. discloses the low-power, self-checkout shopping receptacle in accordance with claim 1, wherein the computer hardware comprises:
a programmable processor; and a non-transitory machine-readable medium storing instructions that, when executed by the processor, cause the at least one programmable processor to perform at least some of the operations (pg.25, ¶ [0071] discusses a non-transitory memory storage device storing thereon a computer readable medium (CRM) for recognizing an object motion through a three-dimensional (3D) virtual construct, the CRM comprising a set of executable instructions configured to, when executed by at least one processor, cause the at least one processor to perform the steps of: using a panel's sensor in communication with the article of manufacture, detecting motion of the object through and/or within the 3D virtual construct).
As per Claim 5, Moshkovitz et al. discloses the low-power, self-checkout shopping receptacle in accordance with claim 1. However, Moshkovitz et al. fails to disclose wherein the one or more features includes a barcode.
Doke et al. teaches wherein the one or more features includes a barcode (Col. 9, lines 3-6 discusses the item-identification component may analyze each frame of image data using an item- or barcode-localization component that may identify a region of an individual frame of data that includes an item or a barcode).
Therefore, it would have been obvious to one of ordinary skill in the art of item-identifying carts before the effective filing date of the claimed invention to modify the system of Moshkovitz et al. to include the ability to identify multiple products entering a shopping cart by reading barcodes, as taught by Doke et al., to provide an item-identifying, mobile cart that may be utilized by a user in a materials handling facility to automatically identify items that the user places into a basket of the cart (Abstract).
As per Claim 6, Moshkovitz et al. discloses a system comprising:
a shopping receptacle configured to be moved within a shopping environment (Figure 1, Shopping Cart 1050);
one or more image sensors arranged on the shopping receptacle (Figure 4A, cameras 403i and 404j); and
computer hardware connected with the one or more image sensors (pg.11, ¶ [0044] discusses the digital camera can comprise an image capturing unit or module, a capture controlling module, and a processing unit (which can be the same as or separate from the central processing module); pg.20, ¶ [0062] discusses that the term module refers to software, hardware (for example, a processor), or a combination thereof that is programmed with instructions for carrying out an algorithm or method), and
configured to perform operations comprising:
demarcate an area of interest associated with the shopping receptacle within a field of view (FOV) of the one or more image sensors and within the shopping environment (pg.1, ¶ [0023] discusses the system defines a Virtual Recognition Grid (VRG) by creating a virtual area in 3D space, and by monitoring and recognizing objects traversing e.g., “breach…pg.2, ¶ [0027] discusses Camera's FOV is another aspect for forming a VRG...Incorporated reference 17/26,839 pg.9, ¶ [0087] discusses a plurality of imaging modules coupled to the cart, adapted to, at least one of image an item inserted into the cart, and image an area of interest outside the cart);
detect features of one or more products that are imaged by the one or more image sensors in the area of interest associated with the shopping receptacle (pg.4, ¶ [0035] discusses Images derived by the processors from objects' motion through and or within the VRG frame, would contain information about the objects intersecting the VRG, rather than the entire 3D volume as captured by the VRG sensors and/or cameras); and
associate the detected features of the one or more products with a product database to determine a product identification for each of the one or more products that are imaged by the one or more image sensors (pg.19, ¶ [0061] discusses detection of product 500 trajectory through e.g., layer 7033, the stable bottom, will for example trigger the system to communicate with the product database and initiate product classification of an incoming product 500…detection of product 500 trajectory through e.g., layer 7031, the insertion VRG layer, can for example, trigger the classification of the product, update the list of items and so on);
identify dynamic content within the area of interest associated with the shopping receptacle (pg.1, ¶ [0024] discusses the VRG sensors are cameras, such as RGB cameras, utilized for capturing an image of the object traversing through the VRG construct),
the dynamic content related to image content from the one or more image sensors that changes over time (pg.4, ¶ [0024] discusses Capturing of moving object with sufficient image quality needed to visualize such small details, i.e., capturing sharp images of objects during their motion, may require low exposure times); and
process only the dynamic content to determine one or more features of a product that is imaged by the one or more sensors (pg.22, ¶ [0065] discusses detecting motion of the object through and/or within the 3D virtual construct, and detect the object/s type).
Moshkovitz teaches a system and method capable of identifying items placed within a shopping cart.
However, Moshkovitz fails to explicitly state associating the detected features of the one or more products with a product database to determine a product identification for each of the one or more products that are imaged by the one or more image sensors.
Doke et al. teaches associating the detected features of the one or more products with a product database to determine a product identification for each of the one or more products that are imaged by the one or more image sensors (Col. 36, lines 50-54 discusses the item-identification component 1338 may determine an item identifier 1370 for the item in the image data 1356 (or multiple item identifiers 1370) that corresponds to an item in the item data 1358 to which the item corresponds).
Therefore, it would have been obvious to one of ordinary skill in the art of item-identifying carts before the effective filing date of the claimed invention to modify the system of Moshkovitz et al. to include the ability to identify multiple products entering a shopping cart, as taught by Doke et al., to provide an item-identifying, mobile cart that may be utilized by a user in a materials handling facility to automatically identify items that the user places into a basket of the cart (Abstract).
Moshkovitz et al. (including its incorporated references) and Doke et al. form the Moshkovitz-Doke combination, which teaches the limitations of the claimed invention.
Doke et al., Col. 9, lines 43-49, teaches that by turning off the item-identification component, or decreasing the rate at which it operates, during time windows of no activity, battery power of the cart may be conserved without sacrificing the accuracy of the item-identification component.
Col. 13, lines 66-67, discusses that the cart may begin analyzing the image data 126 to attempt to identify an item placed into or removed from the basket or other storage location of the cart 104, or may increase a frame rate associated with this analysis. Lines 33-38 discuss that the activity-detection component 118 may generate feature data for individual frames of the image data 126 and may input this feature data into the model 120, which may be trained to output respective labels 128 indicating whether each individual frame represents the predefined activity.
The cited portions of Doke teach a shopping cart capable of entering a low-power state, detecting activity (i.e., adding or removing an item), and extracting feature data from images.
However, the Moshkovitz-Doke combination fails to explicitly teach processing only the dynamic content to determine one or more features of an additional product that is imaged by the one or more sensors.
Ron et al. teaches processing only the dynamic content to determine one or more features of an additional product that is imaged by the one or more sensors (Col. 9, lines 44-51 discusses the cart may attempt to determine a quantity of the items involved, such as whether one, two, or any other number of instances was placed into the cart or removed from the cart… in some instances, the outcome of a particular event may involve multiple identified items and quantities. For instance, after identifying a first item and a second item, the cart may determine that two instances of the first item were added to the cart).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate the ability to identify a second item placed within a shopping cart, as in the improvement discussed in Ron et al., into the system executing the method of the Moshkovitz-Doke combination. As in Ron et al., it was within the capabilities of one of ordinary skill in the art to add the determination of whether a second item has been placed in the shopping cart to the Moshkovitz-Doke combination, with the predictable result of identifying an additional item as needed in the Moshkovitz-Doke combination.
As per Claim 8, Moshkovitz et al. discloses the system in accordance with claim 6, wherein the operations further comprise:
recognize, using computational logic and the product database (pg.9, ¶ [0040] discusses capturing textual information located on a retail-product wrap from two or more sides, by OCR algorithms, leads to improved matching of the textual sequences on that product to a database of textual words from all products in a given store); and
display the product identification for each of the one or more products in an electronic display (pg.18, ¶ [0055] discusses the user interface module is capable of displaying any data that it reads from the imaging/sensing module).
Moshkovitz teaches identifying objects/products entering and exiting a shopping cart throughout. However, Moshkovitz fails to explicitly state determining the product identification for each of the one or more products in the shopping receptacle.
Doke et al. teaches the product identification for each of the one or more products in the shopping receptacle (Col.37, lines 26-37 discusses the cart management system 1330 may also include a virtual-cart management component 1342 configured to manage virtual shopping cart data 1368 for the cart 1300. For instance, the virtual-cart management component 1342 may utilize the item data 1358, event-description data 1360, and confidence level data 1362 to add item identifier(s) 1370 to the virtual shopping cart data 1368 for items 806 that were added to the cart 1300, remove item identifier(s) 1370 from the virtual shopping cart data 569 for items 806 that were removed from the cart 1300, and track item quantity data 1372 indicating quantities of particular items 806 in the cart).
Therefore, it would have been obvious to one of ordinary skill in the art of item-identifying carts before the effective filing date of the claimed invention to modify the system of Moshkovitz et al. to include the ability to identify products within a shopping cart, as taught by Doke et al., to provide an item-identifying, mobile cart that may be utilized by a user in a materials handling facility to automatically identify items that the user places into a basket of the cart (Abstract).
As per Claim 9, Moshkovitz discloses the system in accordance with claim 6, wherein the one or more image sensors includes a barcode reader (Incorporated Reference 17/267,843, pg.3, ¶ [0032] discusses a sensor array 107 includes a barcode reader).
As per Claim 10, Moshkovitz discloses the system in accordance with claim 6. However, Moshkovitz fails to disclose wherein the one or more image sensors includes a QR code reader.
Doke et al. teaches wherein the one or more image sensors includes a QR code reader (Col. 23, lines 57-60 discusses the carts 804 may include a first imaging device 834(1) (e.g., an image sensor such as a camera, photodetector, or other sensing apparatus designed to read a one- or two-dimensional barcode)).
Therefore, it would have been obvious to one of ordinary skill in the art of item-identifying carts before the effective filing date of the claimed invention to modify the system of Moshkovitz et al. to include the ability to utilize a two-dimensional barcode reader to identify multiple products entering a shopping cart, as taught by Doke et al., to provide an item-identifying, mobile cart that may be utilized by a user in a materials handling facility to automatically identify items that the user places into a basket of the cart (Abstract).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHFORD S HAYLES whose telephone number is (571)270-5106. The examiner can normally be reached M-F 6AM-4PM with Flex.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fahd Obeid, can be reached at 571-270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASHFORD S HAYLES/Primary Examiner, Art Unit 3627