Prosecution Insights
Last updated: April 18, 2026
Application No. 17/927,398

MACHINE LEARNING RANK AND PREDICTION CALIBRATION

Non-Final OA (§101, §103)

Filed: Nov 23, 2022
Examiner: CHOI, YUK TING
Art Unit: 2164
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Google LLC
OA Round: 3 (Non-Final)

Grant Probability: 72% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (above average; 466 granted / 652 resolved; +16.5% vs TC avg)
Interview Lift: +37.4% (strong lift in allowance among resolved cases with an interview vs. without)
Typical Timeline: 3y 3m avg prosecution; 29 applications currently pending
Career History: 681 total applications across all art units

Statute-Specific Performance

§101: 16.8% (-23.2% vs TC avg)
§103: 55.0% (+15.0% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)

TC average is an estimate. Based on career data from 652 resolved cases.
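As an editorial illustration, the headline figures above reduce to simple ratios. The helper names below are hypothetical, and the with/without-interview split is invented for the example, since the panel does not show the underlying counts:

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gain in allowance rate when an interview was held."""
    return rate_with - rate_without

career = allowance_rate(466, 652)   # ~71.5%, displayed rounded as 72%
# Illustrative with/without split only; the real counts are not shown above.
lift = interview_lift(90.0, 52.6)   # +37.4 percentage points
```

The displayed "+16.5% vs TC avg" is the same kind of arithmetic: the career rate minus the estimated Tech Center average.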

Office Action

Rejections under §101 and §103
DETAILED ACTION

1. This Office action is in response to applicant's communication filed on 11/06/2025, responding to the PTO Office Action mailed on 08/06/2025. The Applicant's remarks and amendments to the claims and/or the specification were considered, with the results set forth below.

2. In response to the last Office Action, claims 1, 6, 7, 11-14 and 21 are amended. Claims 5, 15 and 19 are canceled. As a result, claims 1-4, 6-14, 16-18, 20 and 21 are pending in this Office action.

Response to Arguments

3. Applicant's arguments with respect to the §101 rejections have been fully considered, but they are not persuasive. Applicant argues: "Claims 1-21 have been amended. The rejections under 35 USC 101 as allegedly reciting non-patentable subject matter should be withdrawn." The Examiner disagrees, because claim 1, 14 or 21, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components, and the abstract idea recited in claims 1, 14 and 21 is not integrated into a practical application. Claim 1 and the other independent claims recite inputting a first output, obtained from a first machine learning model, into a second machine learning model, where the second machine learning model is trained on a filtered set of training examples that have been provided as recommendations; the output of the second machine learning model is ranked and delivered to a user device. Claim 1 merely recites an additional element: using a computer to provide data results after a series of data-gathering steps. The computer in all the steps is recited at a high level of generality (i.e., as a generic computer) performing the computer function of providing a subset of results to a user device, such that it amounts to no more than mere instructions to apply the exception using a computer as a tool to retrieve data results after a series of data-gathering steps.
Training a first machine learning model and replacing a subset of training examples with a second machine learning model are insignificant extra-solution activities. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Mere instructions to apply an exception using a generic computer cannot provide an inventive concept, and there is no indication that the recited features improve the functioning of a computer or any other technology. The claim is not patent eligible. Hence, claims 1-4, 6-14, 16-18, 20 and 21 are ineligible under 35 USC 101.

4. Applicant's arguments with respect to the §102 rejections have been fully considered but are moot in view of the new ground(s) of rejection. As mentioned in the previous Office action, the Rosset reference discloses that the second machine learning model is trained on training examples that include features of a set of co-recommended digital components that have been provided together as recommendations. Rosset also discloses inputting training examples, selected from a plurality of groups of training examples, to a second machine-trained model to identify intents associated with the respective queries, and discloses that the second machine-trained model selects a final set of K suggestions based on the ranking associated with the candidate suggestions having the most favorable ranking scores (See para. [0006], para. [0074], para. [0132], para. [0133], Figures 4 and 14-16). Rosset does not, however, explicitly disclose a model trained on training examples in which one or more features of at least a subset of the training examples have been modified by removing information about co-recommended digital components.
The Examiner has incorporated a newly cited reference, Fuxman, to teach the recited feature "one or more features of at least a subset of the training examples have been modified by removing information about co-recommended digital components" (See Fuxman, para. [0020] and para. [0119]). Therefore, it is the combination of the cited references Fuxman and Rosset that discloses the amended feature.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter: the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more. Claim 1 is directed to the abstract idea of providing at least one digital component in a subset of digital components based on ranking, as explained in detail below. The claim does not include elements that are sufficient to amount to significantly more than the judicial exception, because the elements can be concepts performed in the human mind, which do not add meaningful limits to practicing the abstract idea.
Claim 1 recites a method comprising at least in part: receiving a digital component request (e.g., observing a request can be performed in the human mind); providing, as input to a first machine learning model, first input data comprising feature values for features of each digital component in a set of digital components, wherein the first machine learning model is trained to output, for each digital component, a score that indicates a likelihood of a positive outcome for the digital component (e.g., observing and inputting first data including features to a first model can be performed in the human mind); processing the first input data using the first machine learning model (e.g., observing and evaluating the first input data using a first model can be performed in the human mind); receiving, as a first output of the first machine learning model, respective scores for the digital components in the set of digital components (e.g., observing and evaluating scores of the first output data using the first model can be performed in the human mind); providing, as input to a second machine learning model, second input data comprising feature values for features of each digital component in a subset of digital components selected based on the respective scores for the digital components in the set of digital components (e.g., observing and evaluating the second input data using a second model can be performed in the human mind), wherein the second machine learning model is trained to output a ranking of digital components based at least in part on feature values of features of digital components that will be provided together as recommendations, wherein the second machine learning model is trained on training examples that include features of a set of co-recommended digital components that have been provided together as recommendations, and wherein one or more features of at least a subset of the training examples have been modified by removing information about
co-recommended digital components (e.g., observing and evaluating output data [e.g., ranking scores] from the second model for digital objects can be performed in the human mind); processing the second input data using the second machine learning model (e.g., observing and evaluating the second input data using the second model can be performed in the human mind); receiving, as a second output of the second machine learning model, a ranking of the digital components in the subset of digital components (e.g., observing and evaluating the second output data [e.g., ranking] from the second model for digital objects can be performed in the human mind); and providing at least one digital component in the subset of digital components based on the second ranking (e.g., recommending at least one digital component based on the second output data [e.g., ranking] can be performed in the human mind, including observation, evaluation, judgment and opinion).

Claim 1, as recited, falls within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion. That is, other than reciting a computer-implemented method that interacts with machine learning models, nothing in the claim precludes the steps from practically being performed in the mind. Claim 1, under its broadest reasonable interpretation, recites inputting a first output, obtained from a first machine learning model, into a second machine learning model, where the second machine learning model is trained on a filtered set of training examples that have been provided as recommendations; the output of the second machine learning model is ranked and delivered to a user device. The additional feature in claim 1 is merely using a computer as a tool to retrieve data results after a series of data-gathering steps, which is an insignificant extra-solution activity.
Thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to improve the functioning of a computer or any other technology or technical field, and it does not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Therefore, claim 1 is not patent eligible.

Claims 2-4 and 8-10 recite features similar to those of claim 1 and also fall within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment and opinion. Claim 2 further defines a first machine learning model and a second machine learning model and recites training the machine learning models using training examples. There are no additional features that appear to improve the functioning of a computer or any other technology or technical field, or that amount to significantly more than the above-identified judicial exception (the abstract idea). Therefore, claims 2-4 and 8-10 are not patent eligible.

Claims 6 and 7 recite features similar to those of claim 1 and also fall within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment and opinion. Claims 6 and 7 recite selecting training examples and a series of steps used to train the machine learning model.
The features recited in claims 6 and 7 are recited at a high level of generality and add no more to the claimed invention than a computer component that performs an abstract idea. The additional feature that merely uses a computer/device as a tool to retrieve data results after a series of data-gathering steps is an insignificant extra-solution activity. Thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to improve the functioning of a computer or any other technology or technical field, and it does not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Therefore, claims 6 and 7 are not patent eligible.

Claims 11-13 recite features similar to those of claim 1 and also fall within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment and opinion. Claims 11-13 further define the output data, including a third score, a positive outcome and a recommendation. There are no additional features that appear to improve the functioning of a computer or any other technology or technical field, or that amount to significantly more than the above-identified judicial exception (the abstract idea). Therefore, claims 11-13 are not patent eligible.
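As an editorial aside, the independent claims describe a two-stage score-then-rank pipeline. A minimal sketch of that flow may help readers follow the claim language; every name and feature field below is hypothetical and not drawn from the application itself:

```python
from typing import Callable

def run_pipeline(
    components: list[dict],
    first_model: Callable[[dict], float],
    second_model: Callable[[list[dict]], list[dict]],
    subset_size: int,
    k: int,
) -> list[dict]:
    # Stage 1: the first model scores every candidate digital component
    # (a stand-in for "a likelihood of a positive outcome").
    scored = sorted(components, key=first_model, reverse=True)
    subset = scored[:subset_size]   # subset selected based on the scores
    ranked = second_model(subset)   # Stage 2: the second model ranks the subset
    return ranked[:k]               # components delivered to the user device

# Illustrative stand-ins for the two trained models.
def first(c: dict) -> float:
    return c["relevance"]

def second(subset: list[dict]) -> list[dict]:
    return sorted(subset, key=lambda c: c["coherence"], reverse=True)

items = [
    {"id": "a", "relevance": 0.9, "coherence": 0.2},
    {"id": "b", "relevance": 0.8, "coherence": 0.9},
    {"id": "c", "relevance": 0.1, "coherence": 0.99},
]
top = run_pipeline(items, first, second, subset_size=2, k=1)  # -> item "b"
```

Note how component "c" never reaches the second model at subset_size=2: the first-stage score gates which candidates the second-stage ranker ever sees, which is the structural point the claims turn on.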
Claim 14 recites a system comprising at least in part: receiving a digital component request (e.g., observing a request can be performed in the human mind); providing, as input to a first machine learning model, first input data comprising feature values for features of each digital component in a set of digital components, wherein the first machine learning model is trained to output, for each digital component, a score that indicates a likelihood of a positive outcome for the digital component (e.g., observing and inputting first data to a first model can be performed in the human mind); processing the first input data using the first machine learning model (e.g., observing and evaluating the first input data using a first model can be performed in the human mind); receiving, as a first output of the first machine learning model, respective scores for the digital components in the set of digital components (e.g., observing and evaluating scores of the first output data using the first model can be performed in the human mind); providing, as input to a second machine learning model, second input data comprising feature values for features of each digital component in a subset of digital components selected based on the respective scores for the digital components in the set of digital components (e.g., observing and evaluating the second input data using a second model can be performed in the human mind), wherein the second machine learning model is trained to output a ranking of digital components based at least in part on feature values of features of digital components that will be provided together as recommendations, wherein the second machine learning model is trained on training examples that include features of a set of co-recommended
digital components that have been provided together as recommendations, and wherein one or more features of at least a subset of the training examples have been modified by removing information about co-recommended digital components (e.g., observing and evaluating output data [e.g., ranking scores] from the second model for digital objects can be performed in the human mind); processing the second input data using the second machine learning model (e.g., observing and evaluating the second input data using the second model can be performed in the human mind); receiving, as a second output of the second machine learning model, a ranking of the digital components in the subset of digital components (e.g., observing and evaluating the second output data [e.g., ranking] from the second model for digital objects can be performed in the human mind); and providing at least one digital component in the subset of digital components based on the second ranking (e.g., recommending at least one digital component based on the second output data [e.g., ranking] can be performed in the human mind, including observation, evaluation, judgment and opinion).

Claim 14, as recited, falls within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion. That is, other than reciting one or more computers and one or more storage devices storing instructions to interact with machine learning models, nothing in the claim precludes the steps from practically being performed in the mind. Claim 14, under its broadest reasonable interpretation, recites inputting a first output, obtained from a first machine learning model, into a second machine learning model, where the second machine learning model is trained on a filtered set of training examples that have been provided as recommendations; the output of the second machine learning model is ranked and delivered to a user device.
The additional feature in claim 14 is merely using a computer as a tool to retrieve data results after a series of data-gathering steps, which is an insignificant extra-solution activity. Thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to improve the functioning of a computer or any other technology or technical field, and it does not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Therefore, claim 14 is not patent eligible.

Claims 16-18 recite features similar to those of claim 14 and also fall within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment and opinion. Claims 16-18 further define a first machine learning model and a second machine learning model and recite training the machine learning models using training examples. There are no additional features that appear to improve the functioning of a computer or any other technology or technical field, or that amount to significantly more than the above-identified judicial exception (the abstract idea). Therefore, claims 16-18 are not patent eligible.

Claim 20 recites features similar to those of claim 14 and also falls within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG.
The recited concept can be performed in the human mind, including observation, evaluation, judgment and opinion. Claim 20 recites selecting training examples and a series of steps used to train the machine learning model. The features recited in claim 20 are recited at a high level of generality and add no more to the claimed invention than a computer component that performs an abstract idea. The additional feature that merely uses a computer/device as a tool to retrieve data results after a series of data-gathering steps is an insignificant extra-solution activity. Thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to improve the functioning of a computer or any other technology or technical field, and it does not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Therefore, claim 20 is not patent eligible.
Claim 21 recites one or more non-transitory computer-readable storage media comprising at least in part: receiving a digital component request (e.g., observing a request can be performed in the human mind); providing, as input to a first machine learning model, first input data comprising feature values for features of each digital component in a set of digital components, wherein the first machine learning model is trained to output, for each digital component, a score that indicates a likelihood of a positive outcome for the digital component (e.g., observing and inputting first data to a first model can be performed in the human mind); processing the first input data using the first machine learning model (e.g., observing and evaluating the first input data using a first model can be performed in the human mind); receiving, as a first output of the first machine learning model, respective scores for the digital components in the set of digital components (e.g., observing and evaluating scores of the first output data using the first model can be performed in the human mind); providing, as input to a second machine learning model, second input data comprising feature values for features of each digital component in a subset of digital components selected based on the respective scores for the digital components in the set of digital components (e.g., observing and evaluating the second input data using a second model can be performed in the human mind), wherein the second machine learning model is trained to output a ranking of digital components based at least in part on feature values of features of digital components that will be provided together as recommendations (e.g., observing and evaluating output data [e.g., ranking scores] from the second model for digital objects can be performed in the human mind); processing the second input data using the second machine learning model (e.g., observing and evaluating the second input data using the second model can be
performed in the human mind); receiving, as a second output of the second machine learning model, a ranking of the digital components in the subset of digital components (e.g., observing and evaluating the second output data [e.g., ranking] from the second model for digital objects can be performed in the human mind); and providing at least one digital component in the subset of digital components based on the second ranking (e.g., recommending at least one digital component based on the second output data [e.g., ranking] can be performed in the human mind, including observation, evaluation, judgment and opinion).

Claim 21, as recited, falls within one of the groupings of abstract ideas [e.g., mental processes] enumerated in the 2019 PEG. The recited concept can be performed in the human mind, including observation, evaluation, judgment, and opinion. That is, other than reciting one or more computers and one or more storage devices storing instructions to interact with machine learning models, nothing in the claim precludes the steps from practically being performed in the mind. Claim 21, under its broadest reasonable interpretation, recites inputting a first output, obtained from a first machine learning model, into a second machine learning model, where the second machine learning model is trained on a filtered set of training examples that have been provided as recommendations; the output of the second machine learning model is ranked and delivered to a user device. The additional feature in claim 21 is merely using a computer as a tool to retrieve data results after a series of data-gathering steps, which is an insignificant extra-solution activity. Thus, the judicial exception is not integrated into a practical application. The additional feature does not appear to improve the functioning of a computer or any other technology or technical field, and it does not amount to significantly more than the above-identified judicial exception (the abstract idea).
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Therefore, claim 21 is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5, 6, 10, 12-14, 17 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Rosset (US 2021/0326742 A1) in view of Fuxman (US 2018/0210874 A1).

Referring to claim 1, Rosset discloses a computer-implemented method comprising: receiving a digital component request (See para. [0044], para. [0132] and Figures 4, 5, 14; the system receives a query for identifying a set of documents by a user computing device 104, via a computer network); providing, as input to a first machine learning model, first input data comprising feature values for features of each digital component in a set of digital components (See para. [0044], para. [0068], para. [0069], para. [0132] and Figures 4, 5, 14; the system retrieves and extracts a set of features that describe a candidate document [e.g.
feature values for features] from the query, the system provides the query to a first learning model and retrieves at least one suggestion), wherein the first machine learning model is trained to output, for each digital component, a score that indicates a likelihood of a positive outcome for the digital component (See para. [0071], para. [0148] and Figures 4-6; generating at least one suggestion with the first machine-trained model includes generating an initial set of candidate suggestions based on the query and, for each candidate suggestion, using the classification-type neural network to generate a ranking score that identifies the extent to which the candidate suggestion is appropriate for the query); processing the first input data using the first machine learning model; receiving, as a first output of the first machine learning model, respective scores for the digital components in the set of digital components (See para. [0044], para. [0068], para. [0069], para. [0071], para. [0132], para. [0148] and Figures 4-6; generating at least one suggestion with the first machine-trained model includes generating an initial set of candidate suggestions based on the first inputted query [e.g., feature values for features], for each candidate suggestion, using the classification-type neural network to generate a ranking score that identifies the extent to which the candidate suggestion is appropriate for the query, and providing one or more candidate suggestions having the respective top-ranked scores); providing, as input to a second machine learning model, second input data comprising feature values for features of each digital component in a subset of digital components selected based on the respective scores for the digital components in the set of digital components (See para. [0006], para. [0074], para. [0132], para.
[0133], Figures 4 and 14-16; inputting training examples selected from a plurality of groups of training examples, the training examples providing a sequence of queries to a second machine-trained model to identify intents associated with the respective queries, and then determining relationships among the intents; the second machine-trained model selects a final set of K suggestions based on the ranking associated with candidate suggestions having the most favorable ranking scores), wherein the second machine learning model is trained to output a ranking of digital components based at least in part on feature values of features of digital components that will be provided together as recommendations (See para. [0069], para. [0074], para. [0132] and Figures 4, 14; the second machine-trained model selects a final set of K suggestions based on the ranking associated with candidate suggestions having the most favorable ranking scores), wherein the second machine learning model is trained on training examples that include features of a set of co-recommended digital components that have been provided together as recommendations (See para. [0006], para. [0074], para. [0132], para. [0133], Figures 4 and 14-16; inputting training examples selected from a plurality of groups of training examples, the training examples providing a sequence of queries to a second machine-trained model to identify intents associated with the respective queries, and then determining relationships among the intents; the second machine-trained model selects a final set of K suggestions based on the ranking associated with candidate suggestions having the most favorable ranking scores) […] processing the second input data using the second machine learning model (See para. [0006], para. [0074], para. [0132], para.
[0133], Figures 4, 14-16; inputting training examples selected from a plurality of groups of training examples, the training examples providing a sequence of queries to a second machine-trained model to identify intents associated with the respective queries, and then determining relationships among the intents); receiving, as a second output of the second machine learning model, a ranking of the digital components in the subset of digital components; and providing at least one digital component in the subset of digital components based on the second ranking (See para. [0069], para. [0074], para. [0132] and Figures 4, 14; the second machine-trained model selects a final set of K suggestions based on the ranking associated with candidate suggestions having the most favorable ranking scores).

Rosset discloses that the second machine learning model is trained on training examples that include features of a set of co-recommended digital components that have been provided together as recommendations, but does not explicitly disclose that one or more features of at least a subset of the training examples have been modified by removing information about co-recommended digital components. Fuxman discloses that one or more features of at least a subset of the training examples have been modified by removing information about co-recommended digital components (See para. [0020] and para. [0119]; the training data can be filtered such that the previous responses in the training data are more specific to particular content of the previous digital components than other previous responses that have been filtered). Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to modify the second model of Rosset to remove information from training examples, as taught by Fuxman. A skilled artisan would have been motivated to provide improved suggestions based on updated training data and user selections (See Fuxman, para. [0026]).
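As an editorial aside, the feature the Examiner maps to Fuxman, removing co-recommendation information from some training examples, amounts to a masking pass over the training set. A minimal sketch under that reading follows; the field names and the random-subset strategy are hypothetical illustrations, not the application's actual method:

```python
import copy
import random

def mask_co_recommended(examples: list[dict], fraction: float, seed: int = 0) -> list[dict]:
    """Return a copy of the training examples in which a random subset
    has its co-recommended-component information removed; the original
    examples are left untouched."""
    rng = random.Random(seed)
    masked = []
    for ex in examples:
        ex = copy.deepcopy(ex)
        if rng.random() < fraction:
            ex.pop("co_recommended", None)  # strip co-recommendation features
        masked.append(ex)
    return masked

examples = [{"features": [0.1], "co_recommended": ["x"]} for _ in range(4)]
all_masked = mask_co_recommended(examples, fraction=1.0)   # every example stripped
none_masked = mask_co_recommended(examples, fraction=0.0)  # nothing stripped
```

Training the second-stage ranker on a mix of masked and unmasked examples is one plausible way such a model could learn rankings that degrade gracefully when co-recommendation signals are absent.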
Both references (Rosset and Fuxman) are analogous art directed to the same field of endeavor, namely prediction modeling systems; this close relation suggests a reasonable expectation of success. As to claims 3 and 17, Rosset discloses wherein the second machine learning model is a different machine learning model from the first machine learning model and wherein the second machine learning model has been trained differently from the first machine learning model (See para. [0133] and Figure 15, the training examples provide a sequence of queries that have been determined to exhibit a coherent task-related intent for the second machine-trained model, which is different from the first machine-trained model). As to claims 6 and 20, Rosset discloses further comprising: selecting a first plurality of training examples from among the training examples that include co-recommended digital components; modifying one or more features in the first plurality of training examples, wherein modifying a feature in the one or more features comprises removing information about co-recommended items; and adding the first plurality of training examples to the training examples (See para. [0115]-para. [0117], “performing training based on the training examples of group A, the training system 1104 updates the parameter values of the head A in combination with the parameter values of the pre-trained model 1118. When performing training based on the training examples of group B, the training system 1104 updates the parameter values of the head B in combination with the parameter values of the pre-trained model 1118, and so on. Thus, the training system 1104 can be said to use all four groups of training examples to update the parameter values of the pre-trained model 1118. 
This has the effect of generalizing and enhancing the knowledge embodied in the parameter values of the pre-trained model 1118. When training is finished, the training system 1104 outputs a final machine-trained model 1122. The machine-trained model 1122 includes a fine-tuned base model 1124, which corresponds to a fine-tuned counterpart of the pre-trained model 1118. The machine-trained model 1122 can also include a trained head 1126. The head 1126 may correspond to the header logic that has been trained for a particular example-generating method, such as the example-generating method used by the click-based example-generating system 1112. In another implementation, the head 1126 can be built from a combination (e.g., a linear combination) of all four heads learned in the training operation”). As to claim 10, Rosset discloses wherein the second machine learning model is a neural network that includes a partial or full hidden layer that is configured to produce a third score associated with a first hidden digital component based on input associated with at least one second digital component (See para. [0077], “the query-processing system 106 can generate at least one suggestion based on the last n queries that the user has submitted and/or based on other contextual information. The SGS 112 performs its operation using a suggestion-generating component (SGC) 114. As will be set forth below, the SGC 114 may correspond to a neural network, a statistical engine, or any other type of component that operates based on a machine-trained model”). As to claim 12, Rosset discloses wherein a positive outcome is indicative of a user interacting with or being likely to interact with the digital component in the subset of digital components when displayed on a device (See para. [0109], a relative-CTR example-generating system 1114 also generates training examples from the historical click log. 
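The shared-base, per-group-head training scheme quoted above for claims 6 and 20 (each group of training examples updates its own head plus the shared pre-trained parameters, so all groups refine the shared base) can be sketched roughly as follows. The linear "models" and all names here are illustrative stand-ins, not Rosset's actual architecture.

```python
# Shared pre-trained base and one head per example-generating group; every
# group's training step updates its own head AND the shared base parameters.
shared = {"w": 1.0}                      # stand-in for pre-trained model 1118
heads = {g: {"w": 0.5} for g in "ABCD"}  # stand-ins for heads A-D

def train_step(group: str, example: float, target: float, lr: float = 0.01) -> float:
    hidden = shared["w"] * example        # forward through the shared base
    out = heads[group]["w"] * hidden      # forward through the group's head
    err = out - target
    g_head = err * hidden                 # gradient w.r.t. the head weight
    g_shared = err * heads[group]["w"] * example  # gradient reaching the base
    heads[group]["w"] -= lr * g_head      # only this group's head moves
    shared["w"] -= lr * g_shared          # but every group refines the base
    return err ** 2                       # squared-error loss for this step
```

Training on group A alone still changes `shared["w"]`, which is the "generalizing" effect the quoted passage describes.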
But unlike the click-based example-generating system 1112, the relative-CTR example-generating system 1114 computes, for a given query and for a given suggestion under consideration, a rate (e.g., click-through rate or CTR) at which a plurality of users clicked on the suggestion after submitting the query. The relative-CTR example-generating system 1114 then finds examples in which, for the given query, the rate (ctr_1) at which users selected a first suggestion (s_1) exceeds the rate (ctr_2) at which users selected a second suggestion (s_2) by a prescribed amount c. Note that the relative-CTR example-generating system 1114 can normalize each rate by the total number of clicks that the query under consideration has received. Further, the relative-CTR example-generating system 1114 can require that each suggestion be clicked at least a predetermined number of times to qualify as a valid suggestion for consideration. A pairing of the query q and the first suggestion (s_1) provides a positive example, while a pairing of the query and the second suggestion (s_2) constitutes a negative example). As to claim 13, Rosset discloses the recommendations are recommendations of digital components to be displayed on a device (See para. [0053] and Figure 2, presenting four suggestions on the search results page 202). Referring to claims 14 and 21, Rosset discloses a system comprising: one or more computers (See para. [0138] and Figure 18, the computing device 1802 can include computer-readable storage media 1806, corresponding to one or more computer-readable media hardware units); and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations (See para. [0132]-para. [0138] and para. [0142]), the operations comprising: receiving a digital component request (See para. [0044], para. 
[0132] and Figures 4, 5, 14, the system receives a query for identifying a set of documents by a user computing device 104, via a computer network); providing, as input to a first machine learning model, first input data comprising feature values for features of each digital component in a set of digital components (See para. [0044], para. [0068], para. [0069], para. [0132] and Figures 4, 5, 14, the system retrieves and extracts a set of features that describe a candidate document [e.g., feature values for features] from the query), wherein the first machine learning model is trained to output, for each digital component, a score that indicates a likelihood of a positive outcome for the digital component (See para. [0071], para. [0148] and Figures 4-6, the first machine-trained model generates at least one suggestion, including generating an initial set of candidate suggestions based on the query and, for each candidate suggestion, using the classification-type neural network to generate a ranking score that identifies an extent to which the candidate suggestion is appropriate for the query); processing the first input data using the first machine learning model; receiving, as a first output of the first machine learning model, respective scores for the digital components in the set of digital components (See para. [0044], para. [0068], para. [0069], para. [0071], para. [0132] and para. 
[0148] and Figures 4-6, the first machine-trained model generates at least one suggestion, including generating an initial set of candidate suggestions based on the first inputted query [e.g., feature values for features] and, for each candidate suggestion, using the classification-type neural network to generate a ranking score that identifies an extent to which the candidate suggestion is appropriate for the query, and providing one or more candidate suggestions having respective top-ranked scores); providing, as input to a second machine learning model, second input data comprising feature values for features of each digital component in a subset of digital components selected based on the respective scores for the digital components in the set of digital components (See para. [0006], para. [0074], para. [0132], para. [0133], Figures 4 and 14-16, inputting training examples selected from a plurality of groups of training examples, the training examples provide a sequence of queries to a second machine-trained model to identify intents associated with the respective queries, and then determining relationships among the intents; the second machine-trained model selects a final set of K suggestions based on the ranking associated with candidate suggestions having the most favorable ranking scores), wherein the second machine learning model is trained to output a ranking of digital components based at least in part on feature values of features of digital components that will be provided together as recommendations (See para. [0069], para. [0074], para. [0132] and Figures 4, 14, the second machine-trained model selects a final set of K suggestions based on the ranking associated with candidate suggestions having the most favorable ranking scores); wherein the second machine learning model is trained on training examples that include features of a set of co-recommended digital components that have been provided together as recommendations (See para. [0006], para. [0074], para. 
[0132], para. [0133], Figures 4 and 14-16, inputting training examples selected from a plurality of groups of training examples, the training examples provide a sequence of queries to a second machine-trained model to identify intents associated with the respective queries, and then determining relationships among the intents; the second machine-trained model selects a final set of K suggestions based on the ranking associated with candidate suggestions having the most favorable ranking scores) […] processing the second input data using the second machine learning model (See para. [0006], para. [0074], para. [0132], para. [0133], Figures 4, 14-16, inputting training examples selected from a plurality of groups of training examples, the training examples provide a sequence of queries to a second machine-trained model to identify intents associated with the respective queries, and then determining relationships among the intents); receiving, as a second output of the second machine learning model, a ranking of the digital components in the subset of digital components; and providing at least one digital component in the subset of digital components based on the second ranking (See para. [0069], para. [0074], para. [0132] and Figures 4, 14, the second machine-trained model selects a final set of K suggestions based on the ranking associated with candidate suggestions having the most favorable ranking scores). Fuxman discloses one or more features of at least a subset of the training examples have been modified by removing information about co-recommended digital components (See para. [0020] and para. [0119], the training data can be filtered such that the previous responses in the training data are more specific to particular content of the previous digital components than other previous responses that have been filtered). 
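Fuxman's contribution, as characterized here, is a filtering pass over the training data that drops examples whose responses are not specific to the recommended content. A minimal sketch of that kind of filter, assuming a hypothetical token-overlap specificity score (the scorer, the threshold, and all names are illustrative, not from the reference):

```python
def filter_training_examples(examples, specificity, threshold=0.5):
    """Keep only training examples whose response is sufficiently
    specific to the co-recommended component's content.

    examples: list of (response, component_content) pairs.
    specificity: callable scoring how specific a response is to content.
    """
    return [
        (resp, content)
        for resp, content in examples
        if specificity(resp, content) >= threshold
    ]

def token_overlap(response: str, content: str) -> float:
    # Hypothetical specificity proxy: fraction of the content's tokens
    # that also appear in the response.
    content_tokens = set(content.lower().split())
    if not content_tokens:
        return 0.0
    response_tokens = set(response.lower().split())
    return len(content_tokens & response_tokens) / len(content_tokens)
```

A generic response like "thanks" scores near zero against any component content and is filtered out, while a response that names the component survives.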
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to modify the second model of Rosset to remove training example information, as taught by Fuxman. A skilled artisan would have been motivated to provide improved suggestions based on updated training data and user selections (See Fuxman, para. [0026]). Both references (Rosset and Fuxman) are analogous art directed to the same field of endeavor, namely prediction modeling systems; this close relation suggests a reasonable expectation of success. Claims 2, 4, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Rosset (US 2021/0326742 A1) in view of Fuxman (US 2018/0210874 A1) and further in view of Huang (US 2018/0046924 A1). As to claims 2 and 16, Rosset does not explicitly disclose that the second machine learning model is identical to the first machine learning model. Huang discloses the second machine learning model is identical to the first machine learning model (See para. [0037], the prediction models 31 and 32 are respectively identical with the prediction models 21 and 22). Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to modify the second model of Rosset to be identical to the first machine learning model, as taught by Huang. A skilled artisan would have been motivated to identify a series of data being identical with each other and to avoid redundancy (See Huang, para. [0035]). All of the references (Rosset, Fuxman, Huang) are analogous art directed to the same field of endeavor, namely prediction modeling systems; this close relation suggests a reasonable expectation of success. 
As to claims 4 and 18, Rosset does not explicitly disclose executing fewer instructions to process the identical input than the first machine learning model. Huang discloses wherein the second machine learning model, when processing identical input as the first machine learning model, executes fewer instructions to process the identical input than the first machine learning model (See para. [0042], if prediction models 31 and 32 are respectively identical with the prediction models 21 and 22, the prediction model 33 performs the prediction by combining the historical photovoltaic power). Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to modify the second model of Rosset to be identical to the first machine learning model, as taught by Huang. A skilled artisan would have been motivated to identify a series of data being identical with each other and to integrate information for less calculation (See Huang, para. [0035], para. [0037]). All of the references (Rosset, Fuxman, Huang) are analogous art directed to the same field of endeavor, namely prediction modeling systems; this close relation suggests a reasonable expectation of success. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Rosset (US 2021/0326742 A1) in view of Fuxman (US 2018/0210874 A1) and further in view of Anderson (US 2019/0012719 A1). As to claim 7, Rosset discloses wherein training the first machine learning model produces a gradient, […] (See para. [0113], training the first machine-trained model using stochastic gradient descent), wherein the digital component embeddings represent features of the co-recommended digital components (See para. [0071] and para. 
[0148] and Figures 4-6, the first machine-trained model generates at least one suggestion, including generating an initial set of candidate suggestions based on the query and, for each candidate suggestion, using the classification-type neural network to generate a ranking score that identifies an extent to which the candidate suggestion is appropriate for the query). Rosset does not explicitly disclose producing a gradient, the method further comprising propagating the gradient to a plurality of digital component embeddings. Anderson discloses producing a gradient, the method further comprising propagating the gradient to a plurality of digital component embeddings (See para. [0025], the recommendation engine stores embedding vectors or the WALS embedding, but may include any vector similarity model, such as a vector similarity model derived from a stochastic gradient descent or back propagation solution). Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to modify the system of Rosset to propagate a gradient to a plurality of digital component embeddings, as taught by Anderson. A skilled artisan would have been motivated to use available techniques to detect item similarity (See Anderson, para. [0025]). All of the references (Rosset, Fuxman, Anderson) are analogous art directed to the same field of endeavor, namely prediction modeling systems; this close relation suggests a reasonable expectation of success. Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Rosset (US 2021/0326742 A1) in view of Fuxman (US 2018/0210874 A1) and further in view of Shim (US 2021/0383158 A1). As to claim 8, Rosset discloses the first machine learning model processes an input and the second machine learning model processes an input that includes a plurality of digital component embeddings (See Figure 9, para. [0091] and para. 
[0094]). Rosset does not explicitly disclose marginalized embeddings, wherein the machine learning model processes an input that includes marginalized embeddings that represent a marginal contribution of a first feature over a contribution of a second feature. Shim discloses marginalized embeddings wherein the machine learning model processes an input that includes marginalized embeddings that represent a marginal contribution of a first feature over a contribution of a second feature (See para. [0092], para. [0093] and Figure 4, showing that some samples may be more important than others in terms of preserving what the neural network has learned. For example, data from one class that are near the boundary with data from another class in some sense act as sentinels to guard the decision boundaries between classes, where α_k(S) is the index of the kth closest sample (from x^ev_j) in S based on some distance metric. Each sample i is assigned a KNN-SV, s_j(i), that represents the average marginal contribution of the instance to the utility. Due to the additivity of SV, the KNN-SV of a candidate with respect to the evaluation set D_e = {(x_j^ev, y_j^ev)}_{j=1}^{N_e} is obtained by taking the average). Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to modify the system of Rosset to include marginalized embeddings, as taught by Shim. A skilled artisan would have been motivated to measure the average marginal contribution to determine training instance […]
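The KNN Shapley values (KNN-SV) the Shim citation describes admit an exact closed-form recursion for a K-nearest-neighbor classifier: sort training points by distance to the test point, value the farthest point first, then walk inward adding each point's marginal contribution over its outward neighbor. A sketch with scalar features (the function name and data layout are illustrative, not Shim's notation):

```python
def knn_shapley(train, test_point, K=2):
    """Exact Shapley value of each training point for a K-NN classifier's
    accuracy on one test point. train: list of (x, y) with scalar x;
    test_point: (x_test, y_test)."""
    x_test, y_test = test_point
    n = len(train)
    order = sorted(range(n), key=lambda i: abs(train[i][0] - x_test))  # nearest first
    sv = [0.0] * n
    farthest = order[-1]
    sv[farthest] = (1.0 if train[farthest][1] == y_test else 0.0) / n
    for j in range(n - 2, -1, -1):  # walk inward toward the test point
        i, nxt = order[j], order[j + 1]
        match_i = 1.0 if train[i][1] == y_test else 0.0
        match_nxt = 1.0 if train[nxt][1] == y_test else 0.0
        # Marginal contribution of point i over its outward neighbor:
        sv[i] = sv[nxt] + (match_i - match_nxt) / K * min(K, j + 1) / (j + 1)
    return sv
```

By the efficiency (additivity) property the values sum to the full set's utility, here the fraction of the K nearest neighbors whose label matches the test label.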

Prosecution Timeline

Nov 23, 2022
Application Filed
Aug 04, 2025
Non-Final Rejection — §101, §103
Oct 27, 2025
Interview Requested
Nov 05, 2025
Applicant Interview (Telephonic)
Nov 05, 2025
Examiner Interview Summary
Nov 06, 2025
Response Filed
Dec 01, 2025
Final Rejection — §101, §103
Feb 03, 2026
Response after Non-Final Action
Feb 12, 2026
Request for Continued Examination
Feb 23, 2026
Response after Non-Final Action
Apr 09, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591610
SYSTEMS AND METHODS FOR REMOVING NON-CONFORMING WEB TEXT
2y 5m to grant Granted Mar 31, 2026
Patent 12579156
SYSTEMS AND METHODS FOR VISUALIZING ONE OR MORE DATASETS
2y 5m to grant Granted Mar 17, 2026
Patent 12562753
SYSTEM AND METHOD FOR MULTI-TYPE DATA COMPRESSION OR DECOMPRESSION WITH A VIRTUAL MANAGEMENT LAYER
2y 5m to grant Granted Feb 24, 2026
Patent 12536282
METHODS AND APPARATUS FOR MACHINE LEARNING BASED MALWARE DETECTION AND VISUALIZATION WITH RAW BYTES
2y 5m to grant Granted Jan 27, 2026
Patent 12511258
DYNAMIC STORAGE OF SEQUENCING DATA FILES
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+37.4%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 652 resolved cases by this examiner. Grant probability derived from career allow rate.
