Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending in this application and have been examined.
The specification has not been checked to the extent necessary to determine the presence of all possible minor errors.
The specification should be amended to reflect the current status of all related applications, whether patented or abandoned. Applications noted by their serial number and/or attorney docket number should therefore be updated with the correct serial number and, if patented, the patent number.
The first instance of each acronym or abbreviation should be spelled out for clarity, whether or not it is considered well known in the art.
In response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.
The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
37 C.F.R. § 1.83(a) requires the Drawings to illustrate or show all claimed features.
Applicant must clearly point out the patentable novelty that they think the claims present, in view of the state of the art disclosed by the references cited or the objections made, and must also explain how the amendments avoid the references or objections. See 37 C.F.R. § 1.111(c).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 20200042491 A1 (LICHT et al.).
The applied reference has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.
With respect to claim 1, LICHT teaches a method, comprising: receiving transaction information for a transaction, an item image for an item of the transaction, and a candidate item identifier for the item (item/consumer tracker interfaced to cameras/scanners for receiving item images for a transaction, and a transaction manager for receiving item details/identifier by scanning the item from images taken of the transaction at the transaction terminal) [Par. 0015]; selecting a head machine learning model (MLM) from a plurality of head MLMs based at least on the transaction information; obtaining item classification data from a root MLM based on the item image (machine-learning item detector grouping/classifying the item images as batches collected for a number of the checkout transactions) [Par. 0031]; obtaining a predicted item identifier for the item from the head MLM based on the candidate item identifier, the item classification data, and localized metadata for the head MLM (the item identifier used to train a machine-learning algorithm on the item image to recognize the item from subsequent images taken of the item, the algorithm identifying features, factors, and weights, taking the item image as input and producing the item identifier as the expected output) [Par. 0031]; and receiving an actual item identifier for the item and updating the actual item identifier and the item classification data in the localized metadata of the head MLM (the images for each item and its actual identification (item description) provided by the item/consumer tracker to the machine-learning item detector at the end of each transaction or during each transaction, with the machine-learning item detector maintaining the item images and item descriptions) [Par. 0022-0025].
With respect to claim 2, LICHT teaches a method comprising: maintaining the localized metadata with statistics relevant to sales of a store, department of a store, and customers of the store (images taken where the item/consumer tracker flags each image with a unique identifier, such that a single item can be flagged and associated with multiple images; metadata associated with those images is provided by the item/consumer tracker to the machine-learning item detector) [Par. 0031].
With respect to claim 3, LICHT teaches a method further comprising: maintaining and updating the localized metadata without any manual intervention (the transaction terminal being a self-service terminal for self-checkout) [Par. 0023-0024].
With respect to claim 4, LICHT teaches a method, wherein receiving the transaction information further includes receiving at least the transaction information and the candidate item identifier from a transaction terminal that is performing the transaction (item/consumer tracker uniquely keeping track of images representing a particular store and of images of items possessed by the consumer within the store) [Par. 0019].
With respect to claim 5, LICHT teaches a method, wherein selecting further includes identifying a retailer, a store of the retailer, and a geographic region for the store based on a terminal identifier provided in the transaction information (machine-learning item detector using machine-learning approaches to identify factors and features from the training set of images that classify the items, such that the machine-learning item detector can predict and identify the item classification from provided images) [Par. 0015-0017].
With respect to claim 6, LICHT teaches a method, wherein selecting further includes traversing a hierarchy linked to the plurality of head MLMs using the transaction information, the retailer, the store, and the geographic region to identify the head MLM (the item trained on multiple store locations for multiple stores, with each store including its own local item/consumer tracker interfacing with the machine-learning item detector) [Par. 0027; Par. 0030].
With respect to claim 7, LICHT teaches a method, wherein selecting further includes selecting the head MLM based on a customer identifier for a customer provided in the transaction information (an image of an item obtained with its item identifier, with images received for an item that has been uniquely identified) [Par. 0037-0038].
With respect to claim 8, LICHT teaches a method, wherein selecting further includes selecting the head MLM based on a store identifier for a store provided in the transaction information (item identifier obtaining images of the item with the item identifier that were captured for the item from when the item was picked by a consumer from a store shelf and carried through the store to a checkout station where the checkout is processed) [Par. 0038-0039].
With respect to claim 9, LICHT teaches a method, wherein obtaining the predicted item identifier further includes providing the item classification data as a seed item classification vector produced by the root MLM based on the item image (images of items captured as consumers are located within the store, with the item/consumer tracker uniquely keeping track of images representing a particular class) [Par. 0019-0020].
With respect to claim 10, LICHT teaches a method, wherein obtaining the predicted item identifier further includes providing the predicted item identifier as a price lookup (PLU) code to a transaction terminal that is performing the transaction, wherein the item is a produce item (item/consumer tracker keeping track of images representing a particular item, the transaction terminal for item checkout, the item/consumer tracker identifying the transaction when the consumer scans the identified image of the item for payment, the item code and/or item description resolved by the transaction manager) [Par. 0020-0021].
Allowable Subject Matter
Claims 11-20 are allowed.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 12182845 B2 (BANERJEE et al.) teaching a computing device configured to determine a composite similarity value for each item identifier of the set of item identifiers, comparing a similarity of the asset to each item identifier of the set of item identifiers to generate an item recommendation list including each item identifier of the set of item identifiers with a corresponding composite similarity value above a threshold value, and transmit the item recommendation list to the analyst device for display.
US 20240331005 A1 (SAHA et al.) teaching a method that includes: capturing, via a sensor of a mobile computing device, a first item identifier; determining, at the mobile computing device, whether the first item identifier matches one of a plurality of primary item identifiers; in response to determining that the first item identifier matches one of the primary item identifiers, providing input data to a classification model, the input data including the first item identifier; obtaining, via execution of the classification model, an item set identifier corresponding to an item set containing the first item identifier; and presenting the item set identifier at an output of the mobile computing device.
US 20240220822 A1 (WELLMANN et al.) teaching systems and methods for predicting item group composition, the system including a memory storing instructions and at least one processor configured to execute the instructions to perform operations including: receiving entity identification information and a timestamp associated with a transaction without receiving information distinguishing items associated with the transaction; determining, based on the entity identification information, a localized machine learning model trained to predict categories of items based on transaction information applying to all of the items associated with the transaction; and applying the localized machine learning model to a model input to generate predicted categories of items associated with the transaction, the model input including the received entity identification information and the timestamp but not including information distinguishing items associated with the transaction.
US 20230351147 A1 (YANG) teaching a recommendation method, the method including: determining a target user identifier and a plurality of candidate item identifiers; acquiring the intermediate user feature vector corresponding to the target user identifier by searching for the target user identifier; acquiring the intermediate item feature vectors corresponding to the plurality of candidate item identifiers by searching for the plurality of candidate item identifiers; outputting degrees of matching of the plurality of candidate item identifiers and the target user identifier by predicting the intermediate user feature vector and the intermediate item feature vectors in the online prediction model; and determining a target item identifier to be recommended to the target user identifier from the plurality of candidate item identifiers.
WO 2022226270 A2 (KWOK et al.) teaching an entity classification model engine including an entity classification model utilized to predict at least one entity-type classification classifying at least one first entity of the entities as a first entity type, a first plurality of entity-related activity characteristics representing an activity pattern associated with the at least one first entity and extracted from the activity data, an entity rating model engine comprising an entity rating model utilized to predict at least one entity rating prediction for the at least one entity based at least in part on the activity pattern, and an entity rating interface generated comprising the at least one entity rating prediction interface element.
US 20230252443 A1 (McDANIEL et al.) teaching a first Machine-Learning Model (MLM) processed on multiple images of a scene, each scene comprising a different perspective view of each of a plurality of items, the first MLM producing masks for the items within each image, each mask representing a portion of a given item within a given image and used to produce a composite item image for the given item, the item codes for each item passed to a transaction manager to process a transaction.
US 11257004 B2 (LICHT et al.) teaching tracking images of an unknown item picked from a store; the unknown item is identified during checkout and associated with a specific item having a specific item description; the images and the specific item description are obtained by a machine-learning item detector and processed during a machine-learning training session to subsequently identify the item when subsequent item images are taken for the item for subsequent transactions at the store.
US 20240386245 A1 (MAKOWSKI et al.) teaching systems and methods for generating synthetic training data for artificial intelligence-based event detection and/or risk monitoring, the system including one or more processors configured to: receive a prompt identifying one or more characteristics of a training data image; generate, using at least one machine learning model, the training data image based on the one or more characteristics; and provide the training data image as input to the at least one machine learning model to configure the at least one machine learning model using the training data image.
US 20190236531 A1 (ADATO et al.) teaching a method including: accessing at least one planogram describing a desired placement of a plurality of product types on shelves of a plurality of retail stores; receiving image data from the plurality of retail stores; analyzing the image data to determine an actual placement of the plurality of product types on the shelves of the plurality of retail stores; determining at least one characteristic of planogram compliance based on detected differences between the at least one planogram and the actual placement of the plurality of product types on the shelves of the plurality of retail stores; and receiving checkout data from the plurality of retail stores reflecting sales of at least one product type from the plurality of product types.
Mehmed Kantardzic, "Association Rules," in Data Mining: Concepts, Models, Methods, and Algorithms, IEEE, 2003, ch. 8, pp. 165-193.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PIERRE MICHEL BATAILLE whose telephone number is (571)272-4178. The examiner can normally be reached Monday - Thursday 7-6 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TIM VO can be reached at (571) 272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
PIERRE MICHEL BATAILLE
Primary Examiner
Art Unit 2138