Prosecution Insights
Last updated: April 19, 2026
Application No. 16/741,594

ENCODING TEXTUAL DATA FOR PERSONALIZED INVENTORY MANAGEMENT

Final Rejection: §101, §103, §112
Filed: Jan 13, 2020
Examiner: RAMPHAL, LATASHA DEVI
Art Unit: 3688
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Punchh Inc.
OA Round: 6 (Final)

Grant Probability: 34% (At Risk)
OA Rounds: 7-8
To Grant: 3y 11m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 34% (65 granted / 193 resolved; -18.3% vs TC avg)
Interview Lift: +49.0% among resolved cases with interview
Avg Prosecution: 3y 11m; 30 applications currently pending
Total Applications: 223 across all art units

Statute-Specific Performance

§101: 31.7% (-8.3% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 193 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

This rejection is in response to Amendments filed on 09/17/2025. Claims 1-6 and 8-20 are currently pending and have been examined. Claim 7 is cancelled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 09/17/2025 have been fully considered but they are not persuasive. With respect to applicant's arguments on pages 15-17 of remarks filed 09/17/2025 that the claims are not directed to an abstract idea because they provide a technical solution to aggregating words in lists of items using a specialized machine-learning model to output scores, Examiner respectfully disagrees. The machine learning model is not treated as part of the abstract idea itself; rather, it is considered an additional limitation under Step 2A (Prong Two) of the Subject Matter Eligibility Test. Even when considering aggregating words in a list of items to output scores, solving the problem of inconsistent and redundant item descriptions that are difficult to aggregate, and minimizing the likelihood of a false prediction of a word, the claims appear to solve a commercial problem of aggregating item descriptions to determine that various names refer to the same product. Therefore, the claims are directed to an abstract idea as certain methods of organizing human activities (e.g., item recommendations) and mathematical concepts (e.g., calculations). With respect to applicant's arguments on pages 17-19 of remarks filed 09/17/2025 that the claims recite a practical application of the judicial exception because the claims are directed to outputting a recommended item having the highest score, Examiner respectfully disagrees.
If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies a technical problem and explains the details of an unconventional technical solution expressed in the claim, or identifies technical improvements realized by the claim over the prior art. After the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology. See MPEP 2106.05(a).

The courts have also identified limitations that did not integrate a judicial exception into a practical application: merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP § 2106.05(f) and 2106.04(d). It is not apparent to a person of ordinary skill in the art how outputting item recommendations having the highest score improves technology.
Item recommendations are not a problem rooted in technology but rather are directed to a commercial problem of outputting recommended items based on calculations such as the highest score. The claims are not directed towards a practical application because the computer and machine learning model are merely used as a tool to implement the abstract idea of recommending items.

With respect to applicant's arguments on pages 19-20 of remarks filed 09/17/2025 that the claims include meaningful limitations that amount to significantly more because the claimed invention recites calculations such as a dot product and minimizing a mean square error, Examiner respectfully disagrees. Another consideration when determining whether a claim recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words "apply it" (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. See MPEP 2106.05(f). Implementing an abstract idea on a generic computer using machine learning does not integrate the abstract idea into a practical application or add significantly more in Step 2B. The recitation in the claim of a computer implementing machine learning amounts to mere instructions to apply the abstract idea over a network or on a computer. Performing calculations such as a dot product and minimizing a mean square error is not considered an additional element; such calculations are part of the abstract idea of mathematical concepts. Therefore, the claims do not recite significantly more because they recite mere instructions to apply the abstract idea of item recommendations.

With respect to applicant's arguments on pages 20-21 of remarks filed 09/17/2025 that the claims should be allowable over the prior art, Examiner respectfully disagrees.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.

Claim Objections

Claim 1 is objected to because of the following informalities: generating a feature representation for each of the inventory items of based on… Appropriate correction is required. Claims 1, 15, and 18 are objected to because of the following informalities: a inventory item of the plurality of candidate items… Appropriate correction is required.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-6 and 8-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 1, 15, and 18 recite: inputting/provide the first list of items and the second list of items …, based on each of items in the first and second list, rendering said claims indefinite because it is unclear whether "the first list of items and the second list of items" is the same as or different from "the first and second list." Appropriate correction or clarification is required.
Claims 1, 15, and 18 recite: determining, for each of the plurality of inventory items, an observed affinity score based on historical inventory order data; generating, for each of the plurality of inventory items, an optimized affinity score by minimizing a mean square error between the predicted affinity score and an observed affinity score, rendering said claims indefinite because it is unclear whether "an observed affinity score" in the generating step is the same as or different from the "observed affinity score" in the determining step. Appropriate correction or clarification is required. Claims 9, 16, and 19 recite: a first source database… a second source database, rendering said claims indefinite because it is unclear whether these are the same as or different from the "first source database" and "second source database" recited in independent claims 1, 15, and 18. Appropriate correction or clarification is required. Claim 11 recites: an inventory item, rendering said claim indefinite because it is unclear whether the "inventory item" recited in independent claim 1 is the same as or different from the "inventory item" in claim 11. Appropriate correction or clarification is required.

There is insufficient antecedent basis for the following limitations: Claims 1 and 15 recite: an item of the list…; Claims 1, 15, and 18 recite: calculating, for each of the plurality of inventory items; determining a highest item score of the vector for that item; the vector item scores for that inventory item; a dot product of the feature representation of that inventory item; a predicted affinity score based on the dot product of that inventory item. Appropriate correction or clarification is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 and 8-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.

Under Step 1 of the Subject Matter Eligibility Analysis, it must be considered whether the claims are directed to one of the four statutory classes of invention. In the instant case, claims 1-6 and 8-14 are directed to a method, claims 15-17 are directed towards a system, and claims 18-20 are directed to a non-transitory computer readable storage medium, each of which falls within one of the four statutory categories of invention (process/apparatus). Accordingly, the claims will be further analyzed under revised Step 2 of the Subject Matter Eligibility Analysis.

Under Step 2A (Prong One) of the Subject Matter Eligibility Test, it must be considered whether the claims recite a judicial exception; if so, it must then be determined in Prong Two whether the recited judicial exception is integrated into a practical application of that exception. If the claim recites a judicial exception (i.e., an abstract idea), the claim requires further analysis in Prong Two. Under the Subject Matter Eligibility Analysis, certain methods of organizing human activity include fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); mathematical concepts include mathematical relationships, mathematical formulas or equations, and mathematical calculations. See MPEP § 2106.04(a)(2).
Regarding representative independent claim 1, the abstract idea includes: receiving, …, a first descriptive textual data from a first source database of a first enterprise, the first descriptive textual data comprising a first list of items of inventory…; receiving, …, a second descriptive textual data from a second source database of a second enterprise, the second descriptive textual data comprising a second list of items of inventory…; inputting the first list of items and the second list of items to …, based on the each of the items in the first and second list, a vector of item scores, each value of the vector of item scores being representative of a degree that a word of an item of the list corresponds to a first candidate item from a plurality of candidate items; receiving, …, the vector of item scores; for each item of the first and second list, determining a highest item score of the vector for that item; selecting, for each item of the first and second list, a inventory item of the plurality of candidate items, the inventory item being the item associated with the highest item score; accessing customer profile data of a user …, the customer profile data representing purchasing preferences of the user for at least one human characteristic; calculating, for each of the plurality of inventory items, based on the vector of item scores for that inventory item, a vector of human characteristic scores, each value of the vector of human characteristic scores being representative of a degree that the inventory item corresponds to a human characteristic represented in the customer profile data; generating a feature representation for each of the inventory items based on the vector of item scores and the vector of human characteristic scores for that inventory item; determining, based on the feature representation, a customer representation; calculating, for each of the plurality of inventory items, a dot product of the feature representation of that inventory item and the customer representation; determining, for each of the plurality of inventory items, a predicted affinity score based on the dot product of that inventory item; determining, for each of the plurality of inventory items, an observed affinity score based on historical inventory order data; generating, for each of the plurality of inventory items, an optimized affinity score by minimizing a mean square error between the predicted affinity score and an observed affinity score; and outputting, …, a recommended item of the inventory … for the user, the recommended item having a highest optimized affinity score of the plurality of inventory items, the optimized affinity score representing a degree that the descriptive textual data corresponds to a human characteristic score of the user.

This arrangement amounts to certain methods of organizing human activity associated with sales activities and commercial interactions involving item recommendations based on the highest calculated scores for lists of items, determining customer representations based on feature representations of items, and determining scores based on historical inventory data. It also amounts to mathematical concepts, namely inputting text data to calculate vector scores, calculating a dot product, generating scores, and minimizing a mean square error. The courts have held such certain methods of organizing human activity and mathematical concepts ineligible. See MPEP § 2106.

Revised Step 2A (Prong Two) of the Subject Matter Eligibility Analysis is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
See MPEP § 2106. In this instance, the claims recite additional elements such as: A computer-implemented method for encoding descriptive textual data, the computer-implemented method comprising (Claim 1): receiving, over a network, …in an inventory management system;… over the network… in the inventory management system; …into a machine learning model, machine learning model trained to output; …, as output from the machine learning model,…; …of a client device…; and …, over the network to a client device, …from the inventory management system (1, 15, and 18); receiving, over the network, a second set of descriptive textual data …(9, 16, & 19); A system for encoding descriptive textual data, the system comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, configure the system to implement:…(Claim 15); A non-transitory computer readable storage medium storing executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising (18); when executed by one or more processors, cause the one or more processors to (Claim 19).

However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
Independent claims and dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. For example, independent claims and dependent claims are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above. Step 2B is the next step in the eligibility analyses and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same. 
In Step 2A, several additional elements were identified as additional limitations: A computer-implemented method for encoding descriptive textual data, the computer-implemented method comprising (Claim 1): receiving, over a network, …in an inventory management system;… over the network… in the inventory management system; …into a machine learning model, machine learning model trained to output; …, as output from the machine learning model,…; …of a client device…; and …, over the network to a client device, …from the inventory management system (1, 15, and 18); receiving, over the network, a second set of descriptive textual data …(9, 16, & 19); A system for encoding descriptive textual data, the system comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, configure the system to implement:…(Claim 15); A non-transitory computer readable storage medium storing executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising (18); when executed by one or more processors, cause the one or more processors to (Claim 19). These additional limitations, including the limitations in the independent claims and dependent claims, do not amount to an inventive concept because the recitations above do not amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. In addition, they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. For these reasons, the claims are rejected under 35 U.S.C. 101. 
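The mathematical operations the rejection characterizes as abstract (item score vectors, a dot product with a customer representation, and a mean square error) can be made concrete with a brief sketch. The array shapes, variable names, and random stand-in data below are illustrative assumptions only and are not taken from the application:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 5, 8  # hypothetical counts, not from the claims

# Stand-ins for the claimed feature representations: one vector per inventory
# item, plus a single customer representation vector.
item_features = rng.normal(size=(n_items, dim))
customer = rng.normal(size=dim)

# Predicted affinity score: dot product of each item's feature representation
# with the customer representation.
predicted = item_features @ customer

# Observed affinity scores, standing in for historical inventory order data.
observed = rng.normal(size=n_items)

# Mean square error between the predicted and observed scores.
mse = float(np.mean((predicted - observed) ** 2))

# Recommended item: the index with the highest predicted score.
best = int(np.argmax(predicted))
```

Minimizing `mse` over the representations would yield the claim's "optimized affinity score"; the values here are left unoptimized for brevity.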
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 and 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yap et al. (US Pub. No. 2019/0362220 A1, hereinafter "Yap") in view of Malhotra et al. (US Pub. No. 2018/0365715 A1, hereinafter "Malhotra").
Regarding claims 1, 15, and 18, Yap discloses a computer-implemented method for encoding descriptive textual data, the computer-implemented method comprising (Yap, [0002]: computer-implemented method): receiving, over a network, a first descriptive textual data from a first source database of a first enterprise, the first descriptive textual data comprising a first list of items of inventory in an inventory management system; receiving, over the network, a second descriptive textual data from a second source database of a second enterprise, the second descriptive textual data comprising a second list of items of inventory in the inventory management system (Yap, [0034]: item profile 204 provides a list of item attributes for a particular item (e.g., based on an item-specific identifier); [0058]: item attribute based on the words in the description of the item and top words are provided based on weights; [0028]: items include products and goods; [0065]: a set of items; [0015-0016]: network; [0019]: plurality of items; [0070]: data storage system; [0071]: receive data from memory; FIG. 1, [0015]: client device and server system with server devices; [0016]: client device communicates with server system); inputting the first list of items and the second list of items to into a machine learning model, the machine learning model trained to output, based on each of items in the first and second list, a vector of item scores, each value of the vector of item scores being representative of a degree that a word of an item of the list corresponds to a first candidate item from a plurality of candidate items; receiving, as output from the machine learning model, the vector of item scores; (Yap, [0024]: items are fed into recurrent neural network (RNN) using deep learning and models are trained and encodes these items into a vector and uses the vector to predict the next item that the user is recommended to view; [0019]: An item vector is provided for each item of a plurality of items that could be recommended to the user 110 and includes one or more item attributes to provide a representation of the respective item.
The user vector and the item vector are provided as input to the attention-based NCF of the present disclosure, which provides a user latent vector, and an item latent vector, respectively, that are combined and processed to provide a score; [0058]: item attribute based on the words in the description of the item; [0059]: the weighted average of its attributes will produce an item latent vector that is similar to items with similar attributes that have been seen before; [0029]: the attention layer automatically calculates the weighted combination of the vectors; [0031]: input of user and item identifier each encoded as vector; [0014]: for each item in a set of items); for each item of the first and second list, determining a highest item score of the vector for that item; selecting, for each item of the first and second list, a inventory item of the plurality of candidate items, the inventory item being the item associated with the highest item score (Yap, [0066]: outputs the score based on item latent vector that represents compatibility of items with users that is user-item pair; [0046]: after the scores for each user-item pair have been computed, a selection algorithm is used to select the highest-scoring items for each user; [0014]: for each item in a set of items; [0019]: An item vector is provided for each item of a plurality of items that could be recommended to the user 110 which are used to provide item latent vector); accessing customer profile data of a user of a client device, the customer profile data representing purchasing preferences of the user for at least one human characteristic (Yap, [0034]: the user profile 202 provides a list of user attributes for a particular user (e.g., based on a user-specific identifier; [0019]: a user profile can be provided for the user 110, which includes one or more user attributes to provide a representation of the user; [0049]: the user profile table stores multiple attributes about each user (e.g., user identifier,
work domain, job position, full time/part time); [0024]: interactions between user and items); calculating, for each of the plurality of inventory items, based on the vector of item scores for that inventory item, a vector of human characteristic scores, each value of the vector of human characteristic scores being representative of a degree that the inventory item corresponds to a human characteristic represented in the customer profile data (Yap, [0029]: The user vector, and the item vector are input into multiple fully connected feed forward layers before the final output layer predicts a single scalar value as the compatibility score between the user and the item; [0030]: calculates an inner product of the user latent vector and item latent vector in order to estimate the compatibility score of each user-item pair; [0031] The input of NCF is a unique identifier assigned to a user (user identifier), and a unique identifier assigned to an item (item identifier), each encoded as a one-hot vector. 
The user vector and item vector are concatenated, and fed into a multi-layer feed forward neural architecture to output prediction score that estimates the compatibility between the given user and item; [0037]: provides a list of user attributes for a particular user (e.g., based on a user-specific identifier), and the item profile 204 provides a list of item attributes for a particular item (e.g., based on an item-specific identifier) and provide respective values for each attribute resulting in the user vector 206, and the item vector 208 and the user vector 206, and the item vector 208 are processed to provide the user latent vector 214, and the item layer vector 216, respectively; [0042]: given U users and I items to recommend, U×I inference steps need be run to obtain the compatibility score between each user, and each item before retrieving the top items for each user; [0019]: a user profile includes one or more user attributes to provide a representation of the user; [0034]: the item profile 204 provides a list of item attributes for a particular item (e.g., based on a item-specific identifier; [0015]: the user 110 can include a user, who interacts with an application that is hosted by the server system; [0049]: attributes about each user (e.g., user identifier, work domain, job position, full time/part time); [0024]: interactions between user and items); generating a feature representation for each of the inventory items of based on the vector of item scores and the vector of human characteristic scores for that inventory item; determining, based on the feature representation, a customer representation (Yap, [0035]: constructs a user vector representation and an item vector representation; [0022]: Given some metadata about a user or item (e.g., user or item attributes), a feature vector representing a single user or item can be constructed; [0026]: The attention layer performs respective weighted combinations to obtain a user representation, and an item 
representation; [0019]: includes one or more user attributes to provide a representation of the user and item attributes that provide representation of items); calculating, for each of the plurality of inventory items, a dot product of the feature representation of that inventory item and the customer representation; determining, for each of the plurality of inventory items, a predicted affinity score based on the dot product of that inventory item (Yap, [0035]: calculate a scalar du m for each user attribute using a dot product with each user attribute and for items and final attention weights are for each attribute using all scalars, where each user and item is represented with list of user attributes and item attributes represented by user and item vectors that are used to result in user matrix and item matrix); and outputting, over the network to the client device, a recommended item of the inventory from the inventory management system for the user, the recommended item having a highest optimized affinity score of the plurality of inventory items, the optimized affinity score representing a degree that the descriptive textual data corresponds to a human characteristic score of the user (Yap, [0024]: generate recommendations for products; [0041]: recommending content to a user based on a user's topic of interest and age; [0042]: recommend, U×I inference steps need be run to obtain the compatibility score between each user, and each item before retrieving the top items for each user; [0066]: the higher the score, the more compatible the item is to the user; [0067]: The top X items are displayed to the user having higher scores; [0072]: the features can be implemented on a computer having a display device; [0029]: The user vector, and the item vector are input into multiple fully connected feed forward layers before the final output layer predicts a single scalar value as the compatibility score between the user and the item; [0052]: the full set of items is used 
as candidates for recommendation during testing where each model calculates compatibility score of all items and scores are used to rank the items; [0019]: as item vector is provided for each item of a plurality of items that could be recommended to the user and item vector and user vector combined to provide score representing relevance of item to user based on item and user attributes; [0028]: recommending products/goods). Yap does not teach: determining, for each of the plurality of inventory items, an observed affinity score based on historical inventory order data; generating, for each of the plurality of inventory items, an optimized affinity score by minimizing a mean square error between the predicted affinity score and an observed affinity score. However, Malhotra teaches: determining, for each of the plurality of inventory items, an observed affinity score based on historical inventory order data; generating, for each of the plurality of inventory items, an optimized affinity score by minimizing a mean square error between the predicted affinity score and an observed affinity score (Malhotra, [0027]: In order to determine purchase pattern of a customer, the data analytics server 101 collects (302) purchase history of the customer as input; [0020]: derive, by processing the collected purchase history, a combined model that is built by combining temporal and aggregate models generated based on the purchase history and use a Mixture of Experts (ME) to combine the temporal and aggregate features (e.g. number of different items a customer purchases) so as to generate a combined model, which in turn is used to classify the customer as repeating or non-repeating customer; FIG. 
6(a), [0033]: the ME model has a lower mean squared error (MSE) compared to the MSEs of the individual models based on different items a customer purchases weekly; [0044]: enable percolation of features extracted from the transaction history of the customers to the aggregate model, temporal model and ME models; [0034]: if the value is found to be less than that of the reference threshold, then the prediction engine classifies (408) the customer as a non-repeating customer and these values change dynamically; [0037]: ME models used give probabilities as values for the customer being a repeating customer). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the plurality of inventory items of Yap with determining an observed affinity score based on historical inventory order data and generating an optimized affinity score by minimizing a mean square error between the predicted affinity score and an observed affinity score, as taught by Malhotra, because the results of such a modification would be predictable. Specifically, Yap would continue to teach the plurality of inventory items, except that determining an observed affinity score based on historical inventory order data and generating an optimized affinity score by minimizing a mean square error between the predicted affinity score and an observed affinity score for each of the plurality of inventory items is taught according to the teachings of Malhotra to predict purchase behavior of customers. This is a predictable result of the combination. (Malhotra, [0003-0006]).
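The claim 1 mapping above combines two operations: a dot-product affinity score between each item's feature representation and the customer representation (Yap), and optimization of that score by minimizing the mean squared error against observed scores from order history (Malhotra). The sketch below is an illustrative reconstruction only, not the applicant's or the references' actual implementation; the function names, toy data, and the plain gradient-descent optimizer are all assumptions.

```python
import numpy as np

def predicted_affinity(item_features, customer_rep):
    """Predicted affinity per claim 1: dot product of each item's
    feature representation with the customer representation."""
    return item_features @ customer_rep

def optimize_affinity(item_features, customer_rep, observed, lr=0.01, steps=500):
    """Adjust the customer representation to minimize the mean squared
    error between predicted and observed affinity scores (the MSE
    criterion attributed to Malhotra). Gradient descent is assumed."""
    w = customer_rep.astype(float).copy()
    n = len(observed)
    for _ in range(steps):
        residual = predicted_affinity(item_features, w) - observed  # prediction error
        grad = (2.0 / n) * item_features.T @ residual               # d(MSE)/dw
        w -= lr * grad
    return predicted_affinity(item_features, w)                     # optimized scores

# Toy inventory: 3 items x 2 features, with invented "observed" scores
# standing in for historical inventory order data.
items = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
observed = np.array([0.9, 0.1, 1.0])
scores = optimize_affinity(items, np.zeros(2), observed)
recommended = int(np.argmax(scores))  # recommend the highest-scoring item
```

The final `argmax` mirrors the claim's "recommended item having a highest optimized affinity score."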
Regarding claim 2

The combination of Yap and Malhotra teaches the method of claim 1, further comprising generating a plurality of feature representations for each respective candidate item of the plurality of candidate items and human characteristics for the respective candidate item (Yap, [0024]: generate recommendations for products; [0041]: recommending content to a user based on a user's topic of interest and age; [0067]: The top X items are displayed to the user; [0072]: the features can be implemented on a computer having a display device; [0035]: constructs a user vector representation and an item vector representation; [0022]: Given some metadata about a user or item (e.g., user or item attributes), a feature vector representing a single user or item can be constructed; [0026]: The attention layer performs respective weighted combinations to obtain a user representation, and an item representation; [0029]: The user vector, and the item vector are input into multiple fully connected feed forward layers before the final output layer predicts a single scalar value as the compatibility score between the user and the item; [0052]: the full set of items is used as candidates for recommendation during testing where each model calculates a compatibility score of all items).
Regarding claim 3

The combination of Yap and Malhotra teaches the method of claim 1, further comprising: for a first value of the vector of item scores, determining a first vector of human characteristic scores; and for a second value of the vector of item scores, determining a second vector of human characteristic scores (Yap, [0014]: a plurality of user attributes, each user attribute having a value assigned thereto, the user vector being representative of a user, determining a user latent vector by processing the user vector; [0034]: provide respective values for each attribute resulting in the user vector 206, and the item vector; [0044]: provide recommendations for a single user, values would be input into the score calculation, where the user latent vector has to be duplicated I times; [0049]: the user profile table stores multiple attributes about each user (e.g., user identifier, work domain, job position, full time/part time). The values of the user attributes are tokenized, and used as input to each evaluated model; [0041]: recommending content to a user based on a user's topic of interest and age).

Regarding claim 4

The combination of Yap and Malhotra teaches the method of claim 3, wherein generating the feature representation comprises concatenating the first vector of human characteristic scores with the second vector of human characteristic scores (Yap, [0024]: generate recommendations for products; [0041]: recommending content to a user based on a user's topic of interest and age; [0066]: latent vectors are concatenated by the concatenation layer, which extracts higher order features and learns relationships between the user; [0023]: concatenates both vectors to be input into a multilayer feed forward neural network).
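The concatenation recited in claim 4 (and mapped to Yap's concatenation layer) is a simple vector operation: the two human-characteristic score vectors are joined end-to-end to form one feature representation. A minimal sketch with invented values; nothing here comes from the prosecution record.

```python
import numpy as np

# Hypothetical human-characteristic score vectors determined for two
# values of the vector of item scores (claims 3-4); numbers are made up.
first_scores = np.array([0.8, 0.2, 0.5])   # for the first item-score value
second_scores = np.array([0.1, 0.9, 0.4])  # for the second item-score value

# Claim 4: the feature representation is the concatenation of the two,
# so a downstream model (e.g., Yap's feed-forward layers) sees both.
feature_representation = np.concatenate([first_scores, second_scores])
```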
Regarding claim 5

The combination of Yap and Malhotra teaches the method of claim 3, wherein each value of the vector of item scores represents a respective feature category of a plurality of feature categories (Yap, [0067]: rank items based on low scores and high scores; [0052]: top k recommendations based on scores to determine good recommendations; [0046]: after the scores for each user-item pair have been computed, select the highest-scoring items for each user).

Regarding claim 6

The combination of Yap and Malhotra teaches the method of claim 5, wherein determining the first vector of human characteristic scores comprises: retrieving, from a database of importance measures, a first importance measure corresponding to the feature category, wherein the first importance measure comprises a first weight for a first human characteristic corresponding to the feature category; assigning the first importance measure to a first value of the first vector of human characteristic scores; retrieving, from the database of importance measures, a second importance measure corresponding to the feature category, wherein the second importance measure comprises a second weight for a second human characteristic corresponding to the feature category; and assigning the second importance measure to a second value of the first vector of human characteristic scores (Yap, [0003]: the one or more items are selected based on respective user-item scores; [0026]: An attention layer automatically learns the importance of each user attribute, and each item attribute; [0027]: The attention-based NCF model also provides a level of traceability of the importance of factors considered when items are recommended; [0041]: the weights can be used to trace the attributes that the model learns are important; [0041]: recommending content to a user based on a user's topic of interest and age; [0044]: provide recommendations for a single user, values would be input into the
score calculation; [0049]: the user profile table stores multiple attributes about each user; [0046]: after the scores for each user-item pair have been computed, a selection algorithm is used to select the highest-scoring items for each user; [0071]: receive data from memory).

Regarding claim 8

The combination of Yap and Malhotra teaches the method of claim 1, further comprising: determining that a predicted affinity for the first candidate item is within a range of a predicted affinity for a second candidate item; and determining that the recommendation comprises recommendations for both the first candidate item and the second candidate item (Yap, [0020]: a recommender system can return ranked items to the user; [0022]: similarity measures used to find similar items; [0059]: the weighted average of its attributes will produce an item latent vector that is similar to items with similar attributes that have been seen before. Therefore, the attention-based NCF model is able to provide a relatively good prediction for the new item; [0052]: the full set of items is used as candidates for recommendation).
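The attention mechanism the claim 6 mapping leans on (Yap [0026], [0040], [0041]) learns a per-attribute importance weight and forms the user or item representation as a weighted sum, with the weights themselves serving as the traceable "importance measures." The sketch below is a generic dot-product-attention formulation under assumed dimensions and a softmax normalization; Yap's exact parameterization may differ.

```python
import numpy as np

def attention_representation(attribute_vectors, query):
    """Weighted combination of attribute vectors, with importance
    weights derived from dot-product scores against a query vector
    (a common attention formulation; illustrative only)."""
    scores = attribute_vectors @ query           # one scalar per attribute
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax -> importance weights
    return weights @ attribute_vectors, weights  # weighted sum + traceable weights

rng = np.random.default_rng(0)
attrs = rng.normal(size=(4, 3))  # 4 user attributes, 3-dim embeddings (assumed)
query = rng.normal(size=3)
rep, w = attention_representation(attrs, query)
```

The returned `w` is what gives the "traceability of the importance of factors" that Yap [0027] describes: each weight can be read off per attribute.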
Regarding claims 9, 16, and 19

The combination of Yap and Malhotra teaches the method of claim 1, wherein the descriptive textual data is a first set of descriptive textual data and the source database is a first source database, further comprising: receiving, over the network, a second set of descriptive textual data from a second source database; and determining that the first set of descriptive textual data and the second set of descriptive textual data refer to a single candidate item (Yap, [0035]: An attribute set D is provided, which records the attributes for users and items; [0031]: input of user and item identifier each encoded as vector; [0029] and [0039]: each user attribute corresponds to a single vector in a user matrix, and each item attribute corresponds to a single vector in an item matrix; [0034]: the user profile 202 provides a list of user attributes for a particular user (e.g., based on a user-specific identifier), and the item profile 204 provides a list of item attributes for a particular item (e.g., based on an item-specific identifier); [0015-0016]: network; [0018]: data store; [0071]: receive data from memory).
Regarding claims 10, 17, and 20

The combination of Yap and Malhotra teaches the method of claim 9, wherein the feature representation is a first feature representation, and wherein determining that the first set of descriptive textual data and the second set of descriptive textual data refer to the single candidate item comprises: calculating a cosine similarity of the first feature representation and a second feature representation; and determining, based on the cosine similarity, a similarity value indicative of a degree of similarity between a first item associated with the first set of descriptive textual data and a second item associated with the second set of descriptive textual data (Yap, [0022]: Several similarity measures (e.g., cosine similarity) can be applied to the vectors to find similar users or items; [0026]: This induces a similar vector space representation for similar users (items). An attention layer automatically learns the importance of each user attribute, and each item attribute. The attention layer performs respective weighted combinations to obtain a user representation, and an item representation; [0003]: select one or more items from the set of items to recommend to the user based on scores; [0040]: a weighted sum over user attributes, and item attributes to respectively represent users, and items with similar attributes have similar vectors; [0035]: An attribute set D is provided, which records the attributes for users and items; [0031]: input of user and item identifier each encoded as vector; [0029] and [0039]: each user attribute corresponds to a single vector in a user matrix, and each item attribute corresponds to a single vector in an item matrix; [0034]: provides a list of item attributes for a particular item (e.g., based on an item-specific identifier); [0015-0016]: network; [0018]: data store; [0071]: receive data from memory).
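The cosine-similarity step in claims 10, 17, and 20, determining whether two textual descriptions from different source databases refer to the same item, reduces to comparing the angle between the two feature representations. A minimal sketch; the vectors and the decision threshold are invented for illustration and are not claimed values.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature representations:
    1.0 means identical direction, 0.0 means orthogonal."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical feature representations of two item descriptions
# (e.g., "cheeseburger" vs "burger w/ cheese") from two databases.
desc_a = np.array([0.9, 0.1, 0.4])
desc_b = np.array([0.8, 0.2, 0.5])

similarity = cosine_similarity(desc_a, desc_b)
same_item = similarity > 0.95  # threshold is an assumption, not from the claims
```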
Regarding claim 11

The combination of Yap and Malhotra teaches the method of claim 1, wherein each entry of the list corresponds to an inventory item (Yap, [0028]: recommending any appropriate type of content (e.g., products, goods); [0031]: The input of NCF is a unique identifier assigned to an item (item identifier); [0018]: data store; [0069]: memory).

Regarding claim 12

The combination of Yap and Malhotra teaches the method of claim 11, further comprising: determining a feature representation for an inventory category; and comparing each feature representation of a plurality of feature representations to the feature representation for the inventory category, wherein each of the plurality of feature representations correspond to a respective plurality of items in the list of inventory (Yap, [0019]: the item vector includes one or more item attributes to provide a representation of the respective item; [0022]: a feature vector representing a single item; [0059]: item latent vectors provided in accordance with the present disclosure can be clustered within the same vector space to show that similar sentences are clustered into the same cluster; [0060]: K=100 for the number of clusters, and the item attributes (learning course descriptions) are inspected for each cluster showing the top-3; [0024]: generate recommendations for products; [0040]: items with similar attributes to have similar vectors (due to the weighted sum) compared to items with different attributes).

Regarding claim 13

The combination of Yap and Malhotra teaches the method of claim 12, further comprising categorizing, based on the comparisons, a set of the plurality of items as belonging to the inventory category (Yap, [0024]: Items (e.g.
products) viewed by the user are sorted in chronological order; [0058]: (item), each attribute is sorted based on its attention weights, in this case, the words in the description of the item, and the top 5 words are provided; [0040]: items with similar attributes to have similar vectors (due to the weighted sum) compared to items with different attributes).

Regarding claim 14

The combination of Yap and Malhotra teaches the method of claim 12, further comprising ranking, based on the comparisons, a set of the plurality of items as having a respective plurality of similarity values within a range of a similarity value of a target item in the inventory category (Yap, [0052]: calculates the compatibility score of each item for all items. The scores are used to rank the items; [0046]: select the highest-scoring items for each user; [0040]: items with similar attributes to have similar vectors (due to the weighted sum) compared to items with different attributes).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Divakaran et al. (US Pub. No. 2016/0063692 A1), related to semantics and encoded words, cited on the IDS on 05/18/2020, and non-patent literature related to item recommendations using learned similarity scores, cited as Reference-U on PTO-892. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey A. Smith can be reached on (571) 272-6763. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LATASHA D RAMPHAL/Examiner, Art Unit 3688 /Jeffrey A. Smith/Supervisory Patent Examiner, Art Unit 3688

Prosecution Timeline

Jan 13, 2020
Application Filed
Mar 09, 2020
Response after Non-Final Action
Jul 27, 2022
Non-Final Rejection — §101, §103, §112
Feb 03, 2023
Response Filed
May 15, 2023
Final Rejection — §101, §103, §112
Nov 20, 2023
Request for Continued Examination
Nov 21, 2023
Response after Non-Final Action
Nov 28, 2023
Non-Final Rejection — §101, §103, §112
Jun 06, 2024
Response Filed
Oct 05, 2024
Final Rejection — §101, §103, §112
Apr 09, 2025
Request for Continued Examination
Apr 10, 2025
Response after Non-Final Action
Jun 12, 2025
Non-Final Rejection — §101, §103, §112
Sep 10, 2025
Examiner Interview Summary
Sep 10, 2025
Applicant Interview (Telephonic)
Sep 17, 2025
Response Filed
Dec 22, 2025
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572964
NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM AND SYSTEM PERFORMING SPECIFIC PROCESS WHICH ENABLES PAYMENT OF CHARGE OF ARTICLE
2y 5m to grant Granted Mar 10, 2026
Patent 12572934
SYSTEM AND METHOD FOR IMPLEMENTING AN EDGE QUEUING PLATFORM
2y 5m to grant Granted Mar 10, 2026
Patent 12561750
Barmaster Drink Delivery System
2y 5m to grant Granted Feb 24, 2026
Patent 12555149
QUEUE MANAGEMENT DEVICE FOR PROVIDING INFORMATION ABOUT ACCESS WAITING SCREEN AND METHOD THEREOF
2y 5m to grant Granted Feb 17, 2026
Patent 12548058
RETAIL STORE MOTION SENSOR METHODS
2y 5m to grant Granted Feb 10, 2026
Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
34%
Grant Probability
83%
With Interview (+49.0%)
3y 11m
Median Time to Grant
High
PTA Risk
Based on 193 resolved cases by this examiner. Grant probability derived from career allow rate.
