Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 6 is objected to because of the following informalities. Claim 6 currently states: “The method of claim 1, wherein determining, based on the image data, whether the object transportation apparatus is located within the target location further comprises: determining whether a detection signal from a second device corresponds to the object transportation apparatus being located within the target location”. However, there is no previous recitation of a device or a first device in Claim 6 or Claim 1. While it could be assumed that the second device refers to some device not located within, or part of, the imaging assembly described in Claim 1, the current language raises concerns of insufficient antecedent basis or misinterpretation of the claims. Language describing this as a separate device, or similar, would remedy these concerns. Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 4-8, 14, 16, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Musiani (US Publication No. 20240242503 A1).
Regarding Claim 1, Musiani discloses A method for object locationing to initiate an identification session (Reference “Phase 1” and “start the items scanning process”; see Specification paragraph 0077, where a customer can check in to initiate a tag scanning or identification session of items), comprising: capturing, by a first imager of an imaging assembly, an image including image data of a target location at a checkout station (Reference “overhead camera”; see Specification paragraph 0011, where an overhead camera is used to guide customers to the checkout station); analyzing the image data to identify an object transportation apparatus positioned proximate to the target location (Reference “cart”; see Specification paragraph 0011, where the overhead camera can check that the customer places the cart, which is the transportation apparatus, in the correct checkout target location); determining, based on the image data, whether the object transportation apparatus is located within the target location (Reference “cart”; see Specification paragraph 0011, where the overhead camera can check that the customer places the cart in the correct checkout target location. Also note Specification paragraph 0017, where these steps are more clearly divided); and responsive to determining that the object transportation apparatus is located within the target location, initiating, by a second imager of the imaging assembly (Reference “scanner”; see paragraph 0043 and Figures 22A-22C, showing images collected by this scanner, which is discussed separately from the previously mentioned overhead camera. Note the figures show images of barcodes or indicia being scanned), an identification session to verify that each object in the object transportation apparatus is included on a list of decoded indicia (Reference “identify items within the basket”; see Specification paragraph 0075, describing the capture of the shopping basket by the overhead camera and the supervision performed. Further note paragraph 0106, where this supervision process is further described, including checks that, for example, a scanned barcode or indicia matches the corresponding item. These scanned barcodes are also specifically referred to as a list, as noted in paragraph 0078).
Regarding Claim 4, Musiani discloses The method of claim 1, wherein: analyzing the image data further comprises: identifying a floor marking that delineates the target location on a floor of the checkout station (Reference “virtual checkout lane”; see Specification paragraph 0074, where the system checks that the cart is within a virtual checkout lane); and determining whether the object transportation apparatus is located within the target location further comprises: determining whether the object transportation apparatus is located within the floor marking on the floor of the checkout station (Reference “virtual checkout lane”; see Specification paragraph 0074, where this is done with markings projected onto the floor 516).
Regarding Claim 5, Musiani discloses The method of claim 4, wherein the floor marking is a pattern projected onto the floor of the checkout station by one or more of: (a) an overhead lighting device, (b) a cradle lighting device, or (c) a lighting device mounted at a point of sale (POS) station (Reference “smart projector”; see Specification paragraph 0051, where the smart projector, which can project the virtual checkout lane or orientation lines for the cart, is described as being part of the support structure attached to the checkout or POS station).
Regarding Claim 6, Musiani discloses The method of claim 1, wherein determining, based on the image data, whether the object transportation apparatus is located within the target location further comprises: determining whether a detection signal from a second device corresponds to the object transportation apparatus being located within the target location, wherein the second device is (a) a metal detector, (b) a radio frequency identification (RFID) detector, (c) a Near Field Communications (NFC) beacon, or (d) a Bluetooth® Low Energy (BLE) beacon (Reference “NFC”; see Specification paragraph 0072, where the invention is capable of reading NFC tags. RFID and other such technologies are also briefly noted in paragraph 0006 as widely used since the 1990s).
Regarding Claim 7, Musiani discloses The method of claim 1, further comprising: compiling, based on the image data, a list of object characteristics corresponding to one or more characteristics of each object within the object transportation apparatus (Reference “basket or cart analyzer”; see Specification paragraph 0069, where this analyzer identifies all items in the cart based on size and shape, which are characteristics of each object); compiling, during the identification session, a list of decoded indicia including indicia of objects within the object transportation apparatus (Reference “decode”; see Specification paragraph 0072, where the barcodes, which are indicia of the objects, are decoded, and where the NFC tags can be decoded as mentioned previously. There are also item identifiers created by the cart analyzer, see Specification paragraph 0069, which would also read as decoded indicia); detecting a termination of the identification session (Reference “Completing the self-checkout”; see Specification paragraph 0204, describing the entire state flow of the invention and the completion of the checkout by the customer); comparing the list of decoded indicia to the list of object characteristics (Reference “misbehavior”; see Specification paragraph 0204, where the customer is checked for misbehavior as part of the state flow of the invention); and responsive to determining that (i) an indicia is not matched with one or more object characteristics or (ii) one or more object characteristics are not matched with an indicia, activating a mitigation (Reference “misbehavior”; see Specification paragraph 0106, where the previously mentioned fraud or misbehavior is further explained and where it is specifically noted that if a misbehavior occurs, lights are raised, which is a mitigation of theft; as noted in applicant’s specification paragraph 0009, raising an alert is activating a mitigation. Further note Specification paragraph 0105, describing this fraudulent misbehavior with examples of labels not matching the objects).
Regarding Claim 8, Musiani discloses The method of claim 7, wherein the mitigation includes one or more of: (i) marking a receipt, (ii) triggering an alert, (iii) storing video data corresponding to the identification session, (iv) notifying a user, (v) a deactivation signal, (vi) an activation signal, (vii) transmitting an indicia to a point of sale (POS) host to include the indicia on the list of decoded indicia (Reference “misbehavior”; see Specification paragraph 0106, where the previously mentioned fraud or misbehavior is further explained and where it is specifically noted that if a misbehavior occurs, lights are raised, which reads as triggering an alert).
Regarding Claim 14, Musiani discloses the method of Claim 1, wherein the object transportation apparatus is a shopping cart (Reference “cart”; see Specification paragraph 0016, where the cart is scanned by a 3D scanner to check that all items in the cart are scanned), and the method further comprises: detecting, based on the image data, a first object under a basket portion of the shopping cart (Reference “basket” and “obscured”; see Specification paragraph 0016, where an item obscured by another object in the shopping cart is detected. Note in paragraph 0095 the scanning of the cart specifically related to the obscuring mesh of the cart basket); and determining, during the identification session, that a user has moved a scanning device sufficient to scan the first object, wherein the determining is based on one or more of: (i) an internal accelerometer signal, (ii) an elevation sensor signal, (iii) image data indicating that the scanning device is positioned to capture data of the first object, or (iv) signal data from a second device (Reference “3D scanner”; see Specification paragraph 0016, where the above detection of an object is done via a 3D scanner positioned to view the entire cart. Also note paragraphs 0136, 0137, and 0163, where specific instances of customer scanning positioning captured by the image data are discussed in greater detail, including customer hand tracking to check scans).
Regarding Claim 16, which is a device embodiment of the method claim disclosed above in Claim 1, it is also taught by Musiani (Reference “system and modules for monitoring and controlling a self-checkout process”; see Specification paragraph 0099, where processors and controllers are described in conjunction with the components described in Claim 1 above).
Regarding Claim 20, which is a computer readable medium embodiment of the method claim disclosed above in Claim 1, it is also taught by Musiani (Reference “non-transitory medium”; see Specification paragraph 0054, where processors and controllers are described in conjunction with the components described in Claim 1 above).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 10, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Musiani (US Publication No. 20240242503 A1) in view of Whitelaw et al. (US Publication No. 20250046161 A1).
Regarding Claim 2, Musiani discloses The method of claim 1, but fails to disclose wherein the imaging assembly comprises the second imager being disposed within a handheld scanning apparatus and the first imager being disposed within a base configured to receive the handheld scanning apparatus.
Instead, Whitelaw discloses wherein the imaging assembly comprises the second imager being disposed within a handheld scanning apparatus (Reference “handheld”; see Specification paragraph 0102, where a handheld scanner is provided for reading the barcodes) and the first imager being disposed within a base configured to receive the handheld scanning apparatus (Reference “camera system”; see Specification paragraph 0102, where this same base or terminal also has the camera system and receives the handheld apparatus). Motivation for such a modification is given by Whitelaw as well (see Specification paragraph 0047), where the handheld apparatus allows the customer to scan goods that are still in the basket. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Musiani in view of Whitelaw to allow the customer to scan goods in the cart.
Regarding Claim 10, Musiani discloses the method of Claim 1, but fails to disclose wherein the first imager is disposed within a base configured to receive a handheld scanning apparatus, the base being fixedly attached to a counter edge of the checkout station.
Instead, Whitelaw discloses wherein the first imager is disposed within a base configured to receive a handheld scanning apparatus, the base being fixedly attached to a counter edge of the checkout station (See Figure 1, where the overhead support containing the cameras is attached to a counter edge, the rear edge. Note cameras 153 and 154 in Figure 2). Motivation for such a modification is given by Whitelaw as well (see Specification paragraph 0047), where the handheld apparatus allows the customer to scan goods that are still in the basket. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Musiani in view of Whitelaw to allow the customer to scan goods in the cart.
Regarding Claim 15, Musiani discloses the method of Claim 1, but fails to disclose wherein the first imager is disposed within a handheld scanning apparatus, and the image is captured prior to decoupling the handheld scanning apparatus from a base.
Instead, Whitelaw discloses wherein the first imager is disposed within a handheld scanning apparatus, and the image is captured prior to decoupling the handheld scanning apparatus from a base (Reference “handheld”; see Specification paragraph 0029, where the 3D scanner captures images of the scanning area and does so when the handheld scanner is not in use, which, as noted in paragraph 0079, is specifically for rescanning of items. Note also the other cameras described by Whitelaw). Motivation for such a modification is given by Whitelaw as well (see Specification paragraph 0047), where the handheld apparatus allows the customer to scan goods that are still in the basket. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Musiani in view of Whitelaw to allow the customer to scan goods in the cart.
Claim 17 is rejected for containing similar limitations to Claim 2 as disclosed above.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Musiani (US Publication No. 20240242503 A1) in view of Hewett et al. (US Publication No. 20200096599 A1).
Regarding Claim 9, Musiani discloses The method of claim 1, but fails to disclose detecting, by an RFID detector during the identification session, an obscured object that is within the object transportation apparatus and is obscured from an FOV of the first imager; and obtaining, by the RFID detector, an object identifier for the obscured object.
Instead, Hewett discloses detecting, by an RFID detector during the identification session, an obscured object that is within the object transportation apparatus and is obscured from an FOV of the first imager; and obtaining, by the RFID detector, an object identifier for the obscured object (Reference “LOS and NLOS”; see Specification paragraph 0059, where line of sight and non-line of sight RFID tags are detected. Note an NLOS, or non-line of sight, detected tag would read as a tag obscured from an FOV. Further note these are applied to shopping carts and bags). Motivation for this modification is that it allows the customer to skip a scanning checkout and save time (see Specification paragraph 0132). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Musiani in view of Hewett.
Allowable Subject Matter
Claims 3, 11-13, 18, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding Claim 3, Musiani discloses the method of claim 1, as shown in paragraph 0204, which, as referenced above, describes a state flow of the invention involving “The different states of the checkout process may include at least two of the following: (i) waiting for a customer; (ii) a customer placing a shopping cart or basket with one or more items at the self-checkout area; (iii) scanning items using the scanner and placing the scanned items into the bagging area; (iv) enabling the shopper to pay for the scanned item; (v) notifying the customer to remove the items from the bagging area; (vi) completing the self-checkout without detecting an error or misbehavior; and (vii) detecting an error or misbehavior and providing visual feedback via the projector”, but fails to teach wherein determining whether the object transportation apparatus is located within the target location further comprises: determining, based on the image data, whether (i) the object transportation apparatus is located within the target location and (ii) each object within the object transportation apparatus is fully contained within a field of view (FOV) of the first imager; and responsive to determining that either (i) the object transportation apparatus is not located within the target location or (ii) one or more objects within the object transportation apparatus are not fully contained within the FOV, displaying, on a user interface, an alert indicating a direction for a user to move the object transportation apparatus. Therefore, Claim 3 contains potentially allowable subject matter.
Regarding Claim 11, Musiani discloses The method of claim 1, as shown in paragraph 0204, which, as referenced above, describes a state flow of the invention involving “The different states of the checkout process may include at least two of the following: (i) waiting for a customer; (ii) a customer placing a shopping cart or basket with one or more items at the self-checkout area; (iii) scanning items using the scanner and placing the scanned items into the bagging area; (iv) enabling the shopper to pay for the scanned item; (v) notifying the customer to remove the items from the bagging area; (vi) completing the self-checkout without detecting an error or misbehavior; and (vii) detecting an error or misbehavior and providing visual feedback via the projector”, but fails to teach wherein the first imager is a two-dimensional (2D) camera, the image data is 2D image data of the target location and the object transportation apparatus, and wherein determining whether the object transportation apparatus is located within the target location further comprises: determining that a front edge, a left edge, and a right edge of the target location are unobscured in the 2D image data; determining a first dimension of the object transportation apparatus based on a plurality of features on the object transportation apparatus; comparing the first dimension to a known dimension of the object transportation apparatus; and responsive to determining that (i) the first dimension is substantially similar to the known dimension and (ii) that the front edge, the left edge, and the right edge of the target location are unobscured, determining that the object transportation apparatus is located within the target location. Therefore, Claim 11 contains potentially allowable subject matter.
Regarding Claim 12, it is objected to as being dependent upon Claim 11, with no further rejections.
Regarding Claim 13, Musiani discloses The method of claim 1, as shown in paragraph 0204, which, as referenced above, describes a state flow of the invention involving “The different states of the checkout process may include at least two of the following: (i) waiting for a customer; (ii) a customer placing a shopping cart or basket with one or more items at the self-checkout area; (iii) scanning items using the scanner and placing the scanned items into the bagging area; (iv) enabling the shopper to pay for the scanned item; (v) notifying the customer to remove the items from the bagging area; (vi) completing the self-checkout without detecting an error or misbehavior; and (vii) detecting an error or misbehavior and providing visual feedback via the projector”, but fails to teach wherein the first imager is a three-dimensional (3D) camera, the image data is 3D image data of the target location and the object transportation apparatus, and wherein determining whether the object transportation apparatus is located within the target location further comprises: determining that a front edge, a left edge, and a right edge of the target location are unobscured in the 3D image data; determining a distance of a proximate face of the object transportation apparatus from the imaging assembly based on depth information included as part of the 3D image data corresponding to the object transportation apparatus; comparing the distance of the proximate face of the object transportation apparatus from the imaging assembly to a known distance of a proximate edge of the target location; and responsive to determining that (i) the distance of the proximate face is substantially similar to the known distance of the proximate edge and (ii) that the front edge, the left edge, and the right edge of the target location are unobscured, determining that the object transportation apparatus is located within the target location. Therefore, Claim 13 contains potentially allowable subject matter.
Regarding Claim 18, it is also objected to as it contains similar limitations to Claim 3 described above.
Regarding Claim 19, it is also objected to as it contains similar limitations to Claim 13 described above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER JOHN RODGERS whose telephone number is (703) 756-1993. The examiner can normally be reached from 5:30 AM to 2:30 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER JOHN RODGERS/Examiner, Art Unit 2661
/JOHN VILLECCO/Supervisory Patent Examiner, Art Unit 2661