Prosecution Insights
Last updated: April 19, 2026
Application No. 18/643,890

GENERATING ITEM REPLACEMENTS USING MACHINE LEARNING BASED LANGUAGE MODELS

Final Rejection: §101, §103
Filed: Apr 23, 2024
Examiner: KANG, TIMOTHY J
Art Unit: 3689
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Maplebear Inc.
OA Round: 2 (Final)

Grant Probability: 46% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 1m
Grant Probability With Interview: 72%

Examiner Intelligence

Career Allow Rate: 46% (129 granted / 280 resolved; -5.9% vs TC avg)
Interview Lift: +26.0% (strong; resolved cases with interview vs without)
Avg Prosecution: 3y 1m (typical timeline)
Currently Pending: 49
Total Applications: 329 (career history, across all art units)
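The headline allow rate follows directly from the career counts above; a quick arithmetic check (assuming the dashboard rounds to the nearest whole percent):

```python
# Career counts shown above; rounding convention is an assumption.
granted, resolved = 129, 280
allow_rate = granted / resolved

assert round(allow_rate * 100) == 46  # matches the 46% career allow rate
print(f"{allow_rate:.1%}")
```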

Statute-Specific Performance

§101: 45.8% (+5.8% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 6.3% (-33.7% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 280 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Status of Claims

Claims 1-20 remain pending, and are rejected.

Response to Arguments

Applicant's arguments filed on 12/12/2025 with respect to the rejection under 35 U.S.C. 101 have been fully considered, but are not persuasive for at least the following rationale:

Notably, on pages 13-14 of the Remarks, Applicant argues that the additional elements of the claims are directed to a specific technological solution addressing the technical problem of providing real-time, automated, and interpretable explanations for item replacement decisions when an originally requested item is unavailable. Applicant further argues that the generating of prompts and providing of the prompts to large language models to generate natural language explanations is not a fundamental economic practice or mental process, but is a technical process that improves automated reasoning and explanation generation within a computing environment. Examiner respectfully disagrees. Providing explanations for item replacement decisions when an originally requested item is unavailable is not a technical problem.
This represents determining replacement items with a reason for why each is a suitable or unsuitable replacement when items are unavailable, which is a sales activity of providing alternate items to purchase, and merely automating the sales activity on a computing device. The use of a large language model is merely applied to the abstract idea to provide an output of information for the abstract idea, but the technical functionality of a large language model, and how it is changed or improved, is not recited in the claims. The claims only recite the abstract process of scoring items for replacement and receiving an explanation. The LLM merely functions as a black box by receiving some input of data of the abstract idea, and outputting data, without any recitation of how LLMs function at a technical level. It is evident in the specification that the LLM is not a particular artificial intelligence, but is any generic LLM that is applied to the abstract idea. For example, specification paragraphs [0030-0031] disclose generic details of the LLM, such as that it may have a generative pre-training architecture, or may be configured as any other appropriate architecture including LSTM, Markov, BART, GAN, diffusion models, and the like. As such, the claims are not directed to any technical processes or to improving any computing technology, but are directed to the sales activity of determining replacement items with an explanation, and merely apply generic technology to the abstract idea. In view of the above, the rejection under 35 U.S.C. 101 has been maintained below.

Applicant's arguments filed on 12/12/2025 with respect to the rejection under 35 U.S.C. 103 have been fully considered, but are moot in light of new grounds of rejection. Applicant's amendments necessitated the new grounds of rejection.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claims are directed to a judicial exception without significantly more.

Step 1: Claims 1-8 are directed to a method, which is a process. Claims 9-16 are directed to a non-transitory computer readable storage medium, which is an article of manufacture. Claims 17-20 are directed to a system, which is an apparatus. Therefore, claims 1-20 are directed to one of the four statutory categories of invention.

Step 2A (Prong 1): Taking claim 17 as representative, claim 17 sets forth the following limitations reciting the abstract idea of determining a replacement score for an item and providing an explanation: receiving a request for a first item; identifying that the first item is not available; identifying a second item as a replacement for the first item; identifying a replacement score for the second item as the replacement for the first item; identifying that the second item has a replacement score below a threshold value, indicating a below threshold quality of replacement; generating a prompt, the prompt including information describing the first item and the second item, and a request for an explanation of a reason that the second item having the replacement score below the threshold value is a poor replacement for the first item; receiving a response to the prompt, the response comprising a natural-language explanation of why the second item is a poor replacement for the first item; and sending the natural-language explanation to a user for display. The recited limitations above set forth the process for determining a replacement score for an item and providing an explanation.
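For orientation, the recited sequence reduces to a short control flow. A minimal illustrative sketch follows; the function names, the threshold value, and the `llm()` stand-in are hypothetical and are not taken from the claims or the specification:

```python
# Hypothetical sketch of the claim-17 flow. THRESHOLD and llm() are
# illustrative assumptions; the claims recite no particular model or value.
THRESHOLD = 0.5  # assumed replacement-quality threshold


def llm(prompt: str) -> str:
    # Stand-in for a call to a generic large language model.
    return f"Explanation for: {prompt}"


def handle_request(first_item: str, available: bool,
                   second_item: str, replacement_score: float):
    if available:
        return None  # the first item fulfills the request; no replacement
    if replacement_score < THRESHOLD:
        # Below-threshold quality: prompt the model for a why-not explanation.
        prompt = (f"Item requested: {first_item}. Proposed replacement: "
                  f"{second_item}. Explain why this is a poor replacement.")
        return llm(prompt)  # natural-language explanation for display
    return None
```

As the sketch shows, the prompt is only generated on the below-threshold branch; an above-threshold replacement would simply be offered without the why-not explanation.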
These limitations amount to certain methods of organizing human activity, including commercial or legal transactions (e.g. agreements in the form of contracts, advertising, marketing or sales activities or behaviors, etc.). The claims are directed to identifying items that are unavailable, determining replacement scores for the items compared to a threshold, and providing an explanation for how suitable the replacement item is (see specification [0003] disclosing the problem of poor user experience and delays in completing transactions when the replacement item is not suitable), which is an advertising and marketing activity. Such concepts have been identified by the courts as abstract ideas (see: MPEP 2106.04(a)(2)).

Step 2A (Prong 2): Examiner acknowledges that representative claim 17 recites additional elements, such as: one or more computer processors; a non-transitory computer readable storage medium; a machine learning based language model; a client device; generating a prompt for a machine learning based large language model; and providing the prompt to the machine learning based large language model. Taken individually and as a whole, representative claim 17 does not integrate the recited judicial exception into a practical application of the exception. The additional elements do no more than generally link the use of a judicial exception to a particular technological environment or field of use. Furthermore, this is also because the claim fails to (i) reflect an improvement in the functioning of a computer, or an improvement to other technology or technical field, (ii) implement a judicial exception with a particular machine, (iii) effect a transformation or reduction of a particular article to a different state or thing, or (iv) apply the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
While the claims recite one or more processors and a non-transitory computer readable medium, these elements are recited at a very high level of generality as merely performing the steps of the abstract idea. The specification does not disclose the processor with any particularity, merely reciting that the processor comprises one or more processing units to perform the steps of instructions (specification: [0072]), and discloses the non-transitory computer readable storage medium as any embodiment of a computer program product (specification: [0073]). It is evident that these components are generic computing components that are merely leveraged to perform the abstract idea within a computing environment, and do not offer any other meaningful limitation. The machine learning based language model is likewise not disclosed with any particularity, and may be any of regression models, support vector machines, naïve Bayes, decision trees, k-nearest neighbors, etc. (specification: [0050]), and the large language model can be any of LSTM networks, Markov networks, BART, GAN, diffusion models, and the like (specification: [0031]). The client device is disclosed in specification paragraph [0017] as a personal or mobile computing device, such as a smartphone, a tablet, a laptop computer, or a desktop computer. It is clear that a generic machine learning technique is only applied to the abstract idea to provide an output, the claims are not directed to any particular ability of machine learning, and the client device merely represents a user within a computing environment. The additional elements of the claims only serve to provide a general link to a computing environment.

In view of the above, under Step 2A (Prong 2), representative claim 17 does not integrate the recited exception into a practical application (see: MPEP 2106.04(d)).
Step 2B: Returning to representative claim 17, taken individually or as a whole, the additional elements of claim 17 do not provide an inventive concept (i.e. the additional elements do not amount to significantly more than the exception itself). As noted above, the additional elements recited in claim 17 are recited in a generic manner at a high level of generality and only serve to implement the abstract idea on a generic computing device. The claims result only in an improved abstract idea itself and do not reflect improvements to the functioning of a computer or to another technology or technical field. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements used to perform the claimed process ultimately amount to no more than mere instructions to apply the exception using a generic computer and/or no more than a general link to a technological environment. Even when considered as an ordered combination, the additional elements of claim 17 do not add anything further than when they are considered individually. In view of the above, claim 17 does not provide an inventive concept under Step 2B, and is ineligible for patenting.

Regarding Claim 1 (method): Claim 1 recites at least substantially similar concepts and elements as recited in claim 17, such that similar analysis of the claims would be readily apparent to one of ordinary skill in the art. As such, claim 1 is rejected under at least similar rationale as provided above regarding claim 17.

Regarding Claim 9 (non-transitory computer readable storage medium): Claim 9 recites at least substantially similar concepts and elements as recited in claim 17, such that similar analysis of the claims would be readily apparent to one of ordinary skill in the art. As such, claim 9 is rejected under at least similar rationale as provided above regarding claim 17.
Dependent claims 2-8, 10-16, and 18-20 recite further complexity of the judicial exception (abstract idea) of claim 17, such as by further defining the algorithm of determining a replacement score for an item and providing an explanation, and do not recite any further additional elements. Thus, claims 2-8, 10-16, and 18-20 are each held to recite a judicial exception under Step 2A (Prong 1) for at least similar reasons as discussed above.

Under Prong 2 of Step 2A, the additional elements of dependent claims 2-8, 10-16, and 18-20 also do not integrate the abstract idea into a practical application, considered both individually and as a whole. More specifically, dependent claims 2-8, 10-16, and 18-20 rely on at least similar elements as recited in claim 17. Further additional elements are also acknowledged; however, the additional elements of claims 2-8, 10-16, and 18-20 are recited only at a high level of generality (i.e. as generic computing hardware) such that they amount to nothing more than mere instructions to implement or apply the abstract idea on generic computing hardware (or merely use a computer as a tool to perform an abstract idea). Further, the additional elements do no more than generally link the use of a judicial exception to a particular technological environment or field of use (such as the Internet or computing networks). Secondly, this is also because the claims fail to (i) reflect an improvement in the functioning of a computer, or an improvement to other technology or technical field, (ii) implement the judicial exception with, or use the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, (iii) effect a transformation or reduction of a particular article to a different state or thing, or (iv) apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
Taken individually and as a whole, dependent claims 2-8, 10-16, and 18-20 do not integrate the recited judicial exception into a practical application of the exception under Step 2A (Prong 2). Lastly, under Step 2B, claims 2-8, 10-16, and 18-20 also fail to result in "significantly more" than the abstract idea. The dependent claims recite additional functions that describe the abstract idea and use the computing device to implement the abstract idea, while failing to provide an improvement to the functioning of a computer, another technology, or a technical field. The dependent claims fail to confer eligibility under Step 2B because the claims merely apply the exception on generic computing hardware and generally link the exception to a technological environment. Even when viewed as an ordered combination (as a whole), the additional elements of the dependent claims do not add anything further than when they are considered individually. Taken individually or as an ordered combination, the dependent claims simply convey the abstract idea itself applied on a generic computer and are held to be ineligible under Step 2B for at least similar rationale as discussed above regarding claim 17. Thus, dependent claims 2-8, 10-16, and 18-20 do not add "significantly more" to the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6-9, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pawar (US 20220114640 A1) in view of Alkan (US 20240144346 A1), and in further view of Glesinger (US 20230140125 A1).

Regarding Claim 1: Pawar discloses a method comprising:

receiving a request for a first item; (Pawar: [0040] – “The online concierge system 102 receives 402 a delivery order that includes a set of items and a delivery location. The delivery location may be any location associated with a customer, such as a customer's home or office. The delivery location may be stored with the customer location in the customer database 214. Based on the delivery order, the online concierge system 102 identifies a warehouse 404 for picking the set of items in the delivery order based on the set of items and the delivery location”).

identifying that the first item is not available; (Pawar: [0056] – “One or more of the items included in the order may have limited inventory at the warehouse identified by the order. To account for an item included in the order being unavailable at the warehouse identified by the order, the online concierge system 102 allows the customer 104 to specify a replacement item for an item in the order, authorizing a shopper 108 to obtain the replacement item if the item is unavailable at the warehouse identified by the order”).

identifying a replacement score for the second item as the replacement for the first item; (Pawar: [0061] – “the online concierge system 102 generates a replacement score for an additional item replacing the specific item by combining numbers of times the additional item has been selected as replacements from connections between items in the item graph and numbers of connections between the specific item and the additional item.
The online concierge system 102 weights a number of times an additional item has been selected to replace and item by a value that is inversely related to a number of connections between the specific item and the additional item, attenuating numbers of times the additional item has been selected when the additional item is indirectly connected to the specific item via the item graph”).

identifying that the second item has a replacement score below a threshold value, indicating a below threshold quality of replacement; (Pawar: [0061] – “the online concierge system 102 ranks the alternative items based on their replacement scores and selects 625 an alternative item having at least a threshold position (e.g., a maximum position) in the ranking as the replacement item for the specific item”; Pawar: [0064] – “transmits 630 information identifying replacement products having at least a threshold replacement score”).

Pawar does not explicitly teach a method comprising: generating a prompt for a machine learning based large language model, the prompt including information describing the first item and the second item, and a request for an explanation of a reason that the second item having the replacement score below the threshold value is a poor replacement for the first item; providing the prompt to the machine learning based large language model; receiving a response to the prompt from the machine learning based large language model, the response comprising a natural-language explanation of why the second item is a poor replacement for the first item; sending the natural-language explanation to a client device of a user for display. Notably, however, Pawar does disclose a machine-learned replacement model (Pawar: [0062]), and the replacement score meeting at least a threshold value such that items below the threshold are not selected (Pawar: [0064]).
To that accord, Alkan does teach a method comprising:

generating a prompt for a machine learning based language model, the prompt including information describing the first item and the second item, and a request for an explanation of a reason that the second item having the replacement score below the threshold value is a poor replacement for the first item; (Alkan: [0030] – “the explanation generator can include a machine learning (ML) algorithm operable to receive the stream of recommendation/explanation pairs from the interactions manager module. The ML algorithm is trained to perform the task of automatically and dynamically generating a format/type and component combination (or a format/type alone) for the received explanation”; Alkan: [0027] – “examples of why-not explanations formats/types include “I recommend Movie A and not Movie B because although you like the main actor in Movie B, Movie B is a fantasy and you do not like fantasies.””). In summary, recommendation pairs (first item and second item) are sent to the machine learning algorithm for an explanation, which can include why-not explanations.

providing the prompt to the machine learning based language model; (Alkan: [0030] – “the explanation generator can include a machine learning (ML) algorithm operable to receive the stream of recommendation/explanation pairs from the interactions manager module”).
receiving a response to the prompt from the machine learning based language model, the response comprising a natural-language explanation of why the second item is a poor replacement for the first item; (Alkan: [0030] – “the ML algorithm to refine generated explanation formats/types and component combination to evaluate in real time whether or not explanation format/type and component combinations are meeting the goals/interests of application owners and make adjustments to generated explanation format/type and components combinations in real time when needed”; Alkan: [0027] – “examples of why-not explanations formats/types include “I recommend Movie A and not Movie B because although you like the main actor in Movie B, Movie B is a fantasy and you do not like fantasies.””).

sending the natural-language explanation to a client device of a user for display. (Alkan: [0023] – “if a user uses a streaming movie application to watch Movie A, the streaming movie application can generate and display a list of recommended movies (e.g., Movies B-G) to the user accompanied by an explanation of why the movies are being recommended to the user”; Alkan: Fig. 2C showing the user interface displaying the reasons for the recommendation).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Pawar disclosing a system for scoring and selecting replacement items for an unavailable item with the generating and providing of a prompt of the first and second items to receive a response comprising a natural language explanation of why the second item is a poor match as taught by Alkan. One of ordinary skill in the art would have been motivated to do so in order to assist and influence decision making processes (Alkan: [0002]).
Pawar in view of Alkan does not explicitly teach a large language model. Notably, however, Pawar does disclose a machine-learned replacement model (Pawar: [0062]), and Alkan does teach the explanation generator including a machine learning algorithm (Alkan: [0030]). To that accord, Glesinger does teach a large language model; (Glesinger: [0336] – “Neural network-based trained large language models may be applied to generate the elements of the semantic levels and/or through application of semantic chaining processes”; Glesinger: [0218] – “the computer-based application 925 may deliver a corresponding explanation 250c of why the object was recommended”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Pawar in view of Alkan disclosing a system for scoring and selecting replacement items for an unavailable item with the large language model as taught by Glesinger. One of ordinary skill in the art would have been motivated to do so in order to give the recipient a better sense of whether to interact with the item (Glesinger: [0218]).

Regarding Claim 6: Pawar in view of Alkan and Glesinger discloses the limitations of claim 1 above. Pawar does not explicitly teach a method comprising: generating a second prompt requesting one or more possible replacement items for the first item; providing the prompt to the machine learning based language model; receiving a second response to the second prompt from the machine learning based language model; extracting one or more replacement items from the second response. Notably, however, Pawar does disclose a machine-learned replacement model (Pawar: [0062]), and the replacement score meeting at least a threshold value such that items below the threshold are not selected (Pawar: [0064]).
Furthermore, Pawar discloses identifying more than one alternative item, such that the items may be ranked (performing the steps for each alternate item) (Pawar: [0061]). To that accord, Alkan does teach a method comprising:

generating a second prompt requesting one or more possible replacement items for the first item; (Alkan: [0030] – “the explanation generator can include a machine learning (ML) algorithm operable to receive the stream of recommendation/explanation pairs from the interactions manager module. The ML algorithm is trained to perform the task of automatically and dynamically generating a format/type and component combination (or a format/type alone) for the received explanation”; Alkan: [0027] – “examples of why-not explanations formats/types include “I recommend Movie A and not Movie B because although you like the main actor in Movie B, Movie B is a fantasy and you do not like fantasies.””). In summary, recommendation pairs (first item and second item) are sent to the machine learning algorithm for an explanation, which can include why-not explanations.

providing the prompt to the machine learning based language model; (Alkan: [0030] – “the explanation generator can include a machine learning (ML) algorithm operable to receive the stream of recommendation/explanation pairs from the interactions manager module”).
receiving a second response to the second prompt from the machine learning based language model; (Alkan: [0030] – “the ML algorithm to refine generated explanation formats/types and component combination to evaluate in real time whether or not explanation format/type and component combinations are meeting the goals/interests of application owners and make adjustments to generated explanation format/type and components combinations in real time when needed”; Alkan: [0027] – “examples of why-not explanations formats/types include “I recommend Movie A and not Movie B because although you like the main actor in Movie B, Movie B is a fantasy and you do not like fantasies.””).

extracting one or more replacement items from the second response. (Alkan: [0023] – “the streaming movie application can generate and display a list of recommended movies (e.g., Movies B-G) to the user accompanied by an explanation of why the movies are being recommended to the user”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Pawar disclosing a system for scoring and selecting replacement items for an unavailable item with the generating and providing of a prompt of the first and second items to receive a response comprising a natural language explanation of why the second item is a poor match as taught by Alkan. One of ordinary skill in the art would have been motivated to do so in order to assist and influence decision making processes (Alkan: [0002]).

Regarding Claim 7: Pawar in view of Alkan and Glesinger discloses the limitations of claim 1 above.
Pawar does not explicitly teach a method comprising: generating a second prompt requesting one or more possible replacement items for the first item; providing the prompt to the machine learning based language model; receiving a second response to the second prompt from the machine learning based language model; identifying whether the second item is a good replacement for the first item based on the second response. Notably, however, Pawar does disclose a machine-learned replacement model (Pawar: [0062]), and the replacement score meeting at least a threshold value such that items below the threshold are not selected (Pawar: [0064]). Furthermore, Pawar discloses identifying more than one alternative item, such that the items may be ranked (performing the steps for each alternate item) (Pawar: [0061]). To that accord, Alkan does teach a method comprising:

generating a second prompt requesting one or more possible replacement items for the first item; (Alkan: [0030] – “the explanation generator can include a machine learning (ML) algorithm operable to receive the stream of recommendation/explanation pairs from the interactions manager module. The ML algorithm is trained to perform the task of automatically and dynamically generating a format/type and component combination (or a format/type alone) for the received explanation”; Alkan: [0027] – “examples of why-not explanations formats/types include “I recommend Movie A and not Movie B because although you like the main actor in Movie B, Movie B is a fantasy and you do not like fantasies.””). In summary, recommendation pairs (first item and second item) are sent to the machine learning algorithm for an explanation, which can include why-not explanations.

providing the prompt to the machine learning based language model; (Alkan: [0030] – “the explanation generator can include a machine learning (ML) algorithm operable to receive the stream of recommendation/explanation pairs from the interactions manager module”).
receiving a second response to the second prompt from the machine learning based language model; (Alkan: [0030] – “the ML algorithm to refine generated explanation formats/types and component combination to evaluate in real time whether or not explanation format/type and component combinations are meeting the goals/interests of application owners and make adjustments to generated explanation format/type and components combinations in real time when needed”; Alkan: [0027] – “examples of why-not explanations formats/types include “I recommend Movie A and not Movie B because although you like the main actor in Movie B, Movie B is a fantasy and you do not like fantasies.””).

identifying whether the second item is a good replacement for the first item based on the second response. (Alkan: [0027] – “examples of collaborative-based explanation formats/types include “Users who watched this movie also watched . . . ”. In embodiments of the invention, examples of content-based explanation formats/types include “Based on what you've told us so far, we're recommending Movie A because . . . ”. In embodiments of the invention, examples of demographics-based explanation formats/types include “We recommended the Movie A because Movie A is a military movie and you served in the U.S. military.” In embodiments of the invention, examples of pattern-based explanation formats/types include “12% of people who watched Movie A watched Movie B afterwards.””).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Pawar disclosing a system for scoring and selecting replacement items for an unavailable item with the generating and providing of a prompt of the first and second items to receive a response comprising a natural language explanation of why the second item is a good match as taught by Alkan.
One of ordinary skill in the art would have been motivated to do so in order to assist and influence decision making processes (Alkan: [0002]).

Regarding Claim 8: Pawar in view of Alkan and Glesinger discloses the limitations of claim 1 above. Pawar further discloses a method comprising:

identifying a third item as a replacement for the first item; (Pawar: [0061] – “the online concierge system 102 ranks the alternative items based on their replacement scores and selects 625 an alternative item having at least a threshold position”). In summary, Pawar discloses performing the scoring process for multiple items.

identifying a second replacement score for the third item as the replacement for the first item; (Pawar: [0064] – “transmits 630 information identifying replacement products having at least a threshold replacement score or having at least a threshold position in a ranking of products based on replacement scores”). In summary, replacement scores are calculated for multiple products.

identifying that the third item has the second replacement score above the threshold value, indicating an above threshold quality of replacement; (Pawar: [0064] – “transmits 630 information identifying replacement products having at least a threshold replacement score”).

sending a second response to a client device of a user indicating the third item as the replacement item for the first item. (Pawar: [0064] – “transmits 630 information identifying replacement products having at least a threshold replacement score or having at least a threshold position in a ranking of products based on replacement scores”).

Regarding Claims 9 and 17: Claims 9 and 17 recite substantially similar limitations as claim 1. Therefore, claims 9 and 17 are rejected under the same rationale as claim 1 above.

Regarding Claims 14 and 18: Claims 14 and 18 recite substantially similar limitations as claim 6. Therefore, claims 14 and 18 are rejected under the same rationale as claim 6 above.
Regarding Claims 15 and 19: Claims 15 and 19 recite substantially similar limitations as claim 7. Therefore, claims 15 and 19 are rejected under the same rationale as claim 7 above.

Regarding Claims 16 and 20: Claims 16 and 20 recite substantially similar limitations as claim 8. Therefore, claims 16 and 20 are rejected under the same rationale as claim 8 above.

Claims 2-5 and 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Pawar (US 20220114640 A1), Alkan (US 20240144346 A1), and Glesinger (US 20230140125 A1), in view of Oreshkin (US 20210133951 A1).

Regarding Claim 2: The combination of Pawar, Alkan, and Glesinger discloses the limitations of claim 1 above. The combination does not explicitly teach a method comprising: receiving an image of the second item; using the image to identify the explanation of why the second item has the replacement score below the threshold value.

Notably, however, Pawar does disclose displaying information identifying the specific item, such as an image of the item (Pawar: [0064]), and the replacement score meeting at least a threshold value such that items below the threshold are not selected (Pawar: [0064]), and Alkan does teach the explanation generator including a machine learning algorithm (Alkan: [0030]).

To that end, Oreshkin does teach a method comprising:

receiving an image of the second item; (Oreshkin: [0028] – “an input module 110 produces/receives one or more product signatures of the candidate for a potential copy. The input module 110 then passes these one or more product signatures (which may be images)”).

using the image to identify the explanation of why the second item has the replacement score below the threshold value.
(Oreshkin: [0037] – “in addition to the product signatures of the candidate and the target, the report module may also produce explanations regarding the level of similarity and the system's conclusion about the candidate”; Oreshkin: [0041] – “Based on the level of similarity between the features (detailed by a similarity score in one implementation), a decision is made as to whether the features of the candidate are similar enough to the features of a specific one of the targets. In one implementation, the similarity score is the metric by which level of similarity is assessed. If the similarity score is above a specified, predetermined threshold, then the target and the candidate are determined to be similar enough and may require further action. If this determination is made, the decision making module then passes on the target product features, target product images, candidate product features, and candidate product images (i.e. the product signatures) to an explanation or reporting module 550. This module 550 creates a report that explains why the system determined that the level of similarity between the candidate product and the target product is above the predetermined threshold”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Pawar, Alkan, and Glesinger disclosing the system for scoring replacement items for unavailable items in an order with the receiving of an image and using the image to identify the explanation as taught by Oreshkin. One of ordinary skill in the art would have been motivated to do so in order to determine a level of similarity of the product signatures (Oreshkin: [0004]).

Regarding Claim 3: The combination of Pawar, Alkan, and Glesinger, in view of Oreshkin, discloses the limitations of claim 2 above.
The combination does not explicitly teach wherein the prompt sent to the machine learning based language model comprises the image, wherein the machine learning based language model generates the response comprising the explanation of why the second item has the replacement score below the threshold value based on information including the image.

Notably, however, Pawar does disclose displaying information identifying the specific item, such as an image of the item (Pawar: [0064]), and the replacement score meeting at least a threshold value such that items below the threshold are not selected (Pawar: [0064]), and Alkan does teach the explanation generator including a machine learning algorithm (Alkan: [0030]).

To that end, Oreshkin does teach wherein the prompt sent to the machine learning based language model comprises the image, wherein the machine learning based language model generates the response comprising the explanation of why the second item has the replacement score below the threshold value based on information including the image. (Oreshkin: [0026] – “the system 10 includes a validation module 60 and a continuous learning module 70. The output 50 (not shown in FIG. 2) from the decision module 20 is sent to the validation module 60. The validation module 60 then sends this output (and whatever other data may be necessary) to a user 65 for validation. The user 65 determines if the output from the decision module 20 is correct based on the candidate and the target. The user's decision regarding the validation is then received by the validation module 60.
The user's decision is then sent by the validation module 60 to the continuous learning module 70 to be used in training a model used by the decision module 20 to determine the level of similarity between the data sets it receives”; Oreshkin: [0040] – “the data sent to the user can include not just images and/or product signatures of the candidate and of the target but also explanations of the system's conclusion regarding the similarity between the candidate and the target. These explanations can also be automatically generated along with the similarity score”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Pawar, Alkan, and Glesinger disclosing the system for scoring replacement items for unavailable items in an order with determining the explanation with a machine learning model as taught by Oreshkin. One of ordinary skill in the art would have been motivated to do so in order to determine a level of similarity of the product signatures (Oreshkin: [0004]).

Regarding Claim 4: The combination of Pawar, Alkan, and Glesinger, in view of Oreshkin, discloses the limitations of claim 2 above. The combination does not explicitly teach a method comprising: performing optical character recognition on the image to extract a text from the image; wherein the prompt sent to the machine learning based language model comprises the text extracted from the image, wherein the machine learning based language model generates the response comprising the explanation of why the second item has the replacement score below the threshold value based on information including the text extracted from the image.
Notably, however, Pawar does disclose displaying information identifying the specific item, such as an image of the item (Pawar: [0064]), and the replacement score meeting at least a threshold value such that items below the threshold are not selected (Pawar: [0064]), and Alkan does teach the explanation generator including a machine learning algorithm (Alkan: [0030]), and providing why-not explanations (Alkan: [0027]).

To that end, Oreshkin does teach a method comprising:

performing optical character recognition on the image to extract a text from the image; (Oreshkin: [0045] – “one or more images of candidate indicia (e.g. a trademark or logo) can operate as input to the extraction module. The features of such a candidate indicia can then be extracted. As a variant, the feature extraction process may include extracting any text contained within the indicia and performing an optical character recognition (OCR) process on the extracted text”).

wherein the prompt sent to the machine learning based language model comprises the text extracted from the image, wherein the machine learning based language model generates the response comprising the explanation of why the second item has the replacement score below the threshold value based on information including the text extracted from the image. (Oreshkin: [0045] – “one or more images of candidate indicia (e.g. a trademark or logo) can operate as input to the extraction module. The features of such a candidate indicia can then be extracted. As a variant, the feature extraction process may include extracting any text contained within the indicia”; Oreshkin: [0040] – “the data sent to the user can include not just images and/or product signatures of the candidate and of the target but also explanations of the system's conclusion regarding the similarity between the candidate and the target”).
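The claim 4 pipeline at issue (OCR text from an item image folded into a language-model prompt) can be sketched as follows. Everything here is an illustrative assumption: `build_prompt`, the item names, and the OCR string are hypothetical, and no real OCR or LLM API is invoked.

```python
# Illustrative sketch of claim 4's pipeline: text extracted from the
# candidate item's image (e.g. by OCR) is included in the prompt sent to a
# machine learning based language model. The ocr_text value stands in for
# the output of an OCR step; names are hypothetical, not from any reference.

def build_prompt(first_item, second_item, ocr_text):
    """Compose an LLM prompt that includes text extracted from the image."""
    return (
        f"Item requested: {first_item}\n"
        f"Candidate replacement: {second_item}\n"
        f"Label text read from the candidate's image: {ocr_text}\n"
        "Explain why the candidate's replacement score is below the threshold."
    )

prompt = build_prompt("almond butter", "peanut butter",
                      ocr_text="Contains peanuts. 16 oz.")
```

The prompt string would then be sent to the language model, whose response supplies the claimed natural-language explanation.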
Oreshkin teaches using the extracted features to determine similarity and provide an explanation regarding the similarity (Oreshkin: [0045]; [0040]; see also: [0026]; [0028]; [0043]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Pawar, Alkan, and Glesinger disclosing the system for scoring replacement items for unavailable items in an order with the use of OCR to extract features and generate a response as taught by Oreshkin. One of ordinary skill in the art would have been motivated to do so in order to use a product name or company name to cut down the number of comparisons to be made (Oreshkin: [0045]).

Regarding Claim 5: The combination of Pawar, Alkan, and Glesinger, in view of Oreshkin, discloses the limitations of claim 2 above. The combination does not explicitly teach a method comprising: processing the image using a machine learning based model to obtain one or more features of the second item from the image; wherein the prompt sent to the machine learning based language model comprises the one or more features of the second item, wherein the machine learning based language model generates the response comprising the explanation of why the second item has the replacement score below the threshold value based on information including the one or more features of the second item.

Notably, however, Pawar does disclose displaying information identifying the specific item, such as an image of the item (Pawar: [0064]), and the replacement score meeting at least a threshold value such that items below the threshold are not selected (Pawar: [0064]), and Alkan does teach the explanation generator including a machine learning algorithm (Alkan: [0030]), and providing why-not explanations (Alkan: [0027]).
To that end, Oreshkin does teach a method comprising:

processing the image using a machine learning based model to obtain one or more features of the second item from the image; (Oreshkin: [0026] – “the continuous learning module 70 to be used in training a model used by the decision module 20 to determine the level of similarity between the data sets it receives”; Oreshkin: [0028] – “an input module 110 produces/receives one or more product signatures of the candidate for a potential copy. The input module 110 then passes these one or more product signatures (which may be images)”).

wherein the prompt sent to the machine learning based language model comprises the one or more features of the second item, wherein the machine learning based language model generates the response comprising the explanation of why the second item has the replacement score below the threshold value based on information including the one or more features of the second item. (Oreshkin: [0026] – “the continuous learning module 70 to be used in training a model used by the decision module 20 to determine the level of similarity between the data sets it receives”; Oreshkin: [0040] – “the data sent to the user can include not just images and/or product signatures of the candidate and of the target but also explanations of the system's conclusion regarding the similarity between the candidate and the target”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Pawar, Alkan, and Glesinger disclosing the system for scoring replacement items for unavailable items in an order with determining the explanation with a machine learning model using features obtained from the image as taught by Oreshkin. One of ordinary skill in the art would have been motivated to do so in order to determine a level of similarity of the product signatures (Oreshkin: [0004]).
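Claim 5's variant, in which a machine learning model derives features from the item image and those features go into the prompt, can be sketched in the same hedged way. `extract_features` is a hypothetical stand-in for any image model; the feature strings and item names are invented for illustration, and no specific vision or LLM API is assumed.

```python
# Minimal sketch of claim 5 as mapped in the rejection: an image-based ML
# model produces features of the candidate item, and those features are
# placed in the prompt for the language model. All names are hypothetical.

def extract_features(image_bytes):
    """Hypothetical image model: returns attributes observed in the image."""
    return ["creamy texture", "16 oz jar", "no-stir formula"]

def prompt_with_features(first_item, second_item, features):
    """Fold the image-derived features into the explanation prompt."""
    return (
        f"Requested: {first_item}. Candidate: {second_item}. "
        f"Image-derived features: {', '.join(features)}. "
        "Explain why the replacement score falls below the threshold."
    )

features = extract_features(b"...")  # placeholder for real image bytes
prompt = prompt_with_features("almond butter", "peanut butter", features)
```

The distinction from claim 4 is the source of the prompt material: model-derived features here, OCR-extracted text there.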
Regarding Claim 10: Claim 10 recites substantially similar limitations as claim 2. Therefore, claim 10 is rejected under the same rationale as claim 2 above.

Regarding Claim 11: Claim 11 recites substantially similar limitations as claim 3. Therefore, claim 11 is rejected under the same rationale as claim 3 above.

Regarding Claim 12: Claim 12 recites substantially similar limitations as claim 4. Therefore, claim 12 is rejected under the same rationale as claim 4 above.

Regarding Claim 13: Claim 13 recites substantially similar limitations as claim 5. Therefore, claim 13 is rejected under the same rationale as claim 5 above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY J KANG whose telephone number is (571)272-8069. The examiner can normally be reached Monday - Friday: 7:30 - 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Maria-Teresa Thein, can be reached at 571-272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/T.J.K./
Examiner, Art Unit 3689

/VICTORIA E. FRUNZI/
Primary Examiner, Art Unit 3689
2/12/2026

Prosecution Timeline

Apr 23, 2024: Application Filed
Sep 10, 2025: Non-Final Rejection (§101, §103)
Dec 03, 2025: Interview Requested
Dec 10, 2025: Applicant Interview (Telephonic)
Dec 10, 2025: Examiner Interview Summary
Dec 12, 2025: Response Filed
Feb 09, 2026: Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597058: IDENTIFICATION OF ITEMS IN AN IMAGE AND RECOMMENDATION OF SIMILAR ENTERPRISE PRODUCTS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12541791: Qualitative commodity matching
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12468775: Assistance Method for Assisting in Provision of EC Abroad, and Program or Assistance Server For Assistance Method
Granted Nov 11, 2025 (2y 5m to grant)

Patent 12469070: ITEM LEVEL DATA DETERMINATION DEVICE, METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIA
Granted Nov 11, 2025 (2y 5m to grant)

Patent 12456141: DEVICE AND METHOD FOR SELLING INFORMATION PROCESSING DEVICE
Granted Oct 28, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 46%
With Interview: 72% (+26.0%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 280 resolved cases by this examiner. Grant probability derived from career allow rate.
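The 72% "With Interview" figure appears to be the 46% baseline grant probability plus the +26.0 point interview lift. That additive relationship is an assumption about how the tool derives the number, not a documented formula; the sketch below just checks the arithmetic.

```python
# Assumed derivation of the page's "With Interview" projection:
# baseline career allow rate plus the reported interview lift, in points.
baseline_grant_rate = 0.46   # examiner's career allow rate
interview_lift = 0.26        # +26.0 point lift in cases with an interview

with_interview = baseline_grant_rate + interview_lift
print(f"{with_interview:.0%}")  # prints 72%
```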
