Prosecution Insights
Last updated: April 19, 2026
Application No. 19/193,443

Cross-List Learning to Rank

Non-Final OA · §103 · §DP
Filed: Apr 29, 2025
Examiner: MOBIN, HASANUL
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (506 granted / 675 resolved; +20.0% vs TC avg), above average
Interview Lift: +39.0% (resolved cases with interview), strong
Typical Timeline: 3y 5m average prosecution; 16 applications currently pending
Career History: 691 total applications across all art units

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Tech Center averages are estimates; based on career data from 675 resolved cases.

Office Action

§103 §DP
DETAILED ACTION

Remarks

The instant application, Application No. 19/193,443, filed on April 29, 2025, has a total of 20 claims pending: 2 independent claims and 18 dependent claims, all of which are presented for examination. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner Notes

The examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and for any new claims; that is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this Office action, the applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made, and must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Information Disclosure Statement

As required by MPEP § 609(C), the applicant's submission of the Information Disclosure Statement dated October 2, 2025 is acknowledged, and the cited references have been considered in the examination of the claims now pending. As required by MPEP § 609(C)(2), a copy of the PTOL-1449 initialed and dated by the examiner is attached to the instant Office action.

Drawings

The applicant's drawings as submitted are acceptable for examination purposes.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim 21 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of US Patent No. 12,314,275. Although the conflicting claims are not identical, they are not patentably distinct from each other, as shown in Table 1 below.

Claim 1 of US Patent No. 12,314,275:
A computer-implemented method to perform cross-list learning to rank, the method comprising: obtaining, by a computing system comprising one or more computing devices, a first training example and a second, different training example, the first training example comprising a first plurality of items and a first query, and the second training example comprising a second plurality of items and a second query; processing, by the computing system, a first item from the first plurality of items with a ranking model to generate a first intermediate representation for the first item; processing, by the computing system, a second item from the second plurality of items with the ranking model to generate a second intermediate representation for the second item; determining, by the computing system, a correlation score between the first query and the second query; evaluating, by the computing system, a weighted pairwise ranking loss based on the first intermediate representation, the second intermediate representation, the correlation score, a first label associated with the first item, and a second label associated with the second item; and modifying, by the computing system, the ranking model based on the weighted pairwise ranking loss.

Claim 21 of Instant Application No. 19/193,443:

(New) A computer-implemented method to perform cross-list learning to rank, the method comprising: obtaining, by a computing system comprising one or more computing devices, a cluster of training examples comprising a first training example and a second, different training example, wherein the first training example comprises a first plurality of items and a first query, wherein the second training example comprises a second plurality of items and a second query, and wherein the cluster of training examples were clustered based on a correlation score between the first query and the second query; processing, by the computing system, a first item from the first plurality of items with a ranking model to generate a first intermediate representation for the first item; processing, by the computing system, a second item from the second plurality of items with the ranking model to generate a second intermediate representation for the second item; evaluating, by the computing system, a pairwise ranking loss based on the first intermediate representation and the second intermediate representation; and modifying, by the computing system, the ranking model based on the pairwise ranking loss.

Table 1

As exemplarily illustrated in Table 1 above, both claims are directed to cross-list learning to rank; see the claim language of both for detail. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify or omit the additional elements of claims 1-20 of Patent No. 12,314,275 to arrive at claims 21-40 of the instant application 19/193,443, because the person would have realized that the remaining elements would perform the same functions as before. It has been held that omission of an element and its function in a combination, where the remaining elements perform the same functions as before, involves only routine skill in the art. See In re Karlson, 136 USPQ 184 (CCPA, decided Jan. 16, 1963, Appl. No. 6857). Please also see MPEP § 804.

Independent claim 35 substantially encompasses the method recited in claim 21 and is also rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of US Patent No. 12,314,275.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 21, 25-33, 35-36, and 39-40 are rejected under 35 U.S.C. 103 as being unpatentable over Hunt et al. (US Patent Publication No. 2022/0374483 A1, 'Hunt' hereafter) in view of Liu et al. (US Patent Publication No. 2023/0244727 A1, 'Liu' hereafter) and further in view of Renders et al. (US Patent Publication No. 2021/0383254 A1, 'Renders' hereafter).

Regarding claim 21.
Hunt teaches a computer-implemented method to perform cross-list learning to rank, the method comprising: obtaining, by a computing system comprising one or more computing devices, a cluster of training examples comprising a first training example and a second, different training example (as shown in FIG. 5, in step S505 a set of candidate content items is received; in step S510 a trained ranking model is selected; in step S515 the set of candidate content items is iterated; for example, steps S520 to S530 can be iterated until all of the candidate content items are tested using the trained ranking model. The iteration includes selecting a first candidate content item from the set of candidate content items (i.e., selecting a first training example from a cluster of training examples), the first candidate content item having a vector representation; selecting a second candidate content item from the set of candidate content items (i.e., selecting a second training example from a cluster of training examples), the second candidate content item having a vector representation; generating, using the trained ranking model, a first score based on a user feature and the first vector representation; generating, using the trained ranking model, a second score based on the user feature and the second vector representation; and continuing the iteration over the set of candidate content items using the first candidate content item or the second candidate content item with the highest score between the first score and the second score (i.e., the second training example is different from the first training example). Hunt [0006], [0070], [0082-0088] and Fig. 5), wherein the first training example comprises a first plurality of items and a first query (the iteration includes selecting a first candidate content item from the set of candidate content items (i.e., selecting a first training example from a cluster of training examples), the first candidate content item having a vector representation. Hunt [0006], [0070], [0082-0088] and Fig. 5), wherein the second training example comprises a second plurality of items and a second query (the iteration includes selecting a second candidate content item from the set of candidate content items (i.e., selecting a second training example from a cluster of training examples), the second candidate content item having a vector representation. Hunt [0006], [0070], [0082-0088] and Fig. 5), and wherein the cluster of training examples were clustered based on a correlation score between the first query and the second query (a content item relevance can be relevant to a user and/or personalized for the user. Unlike some other content item consumption routes, users can receive push notifications without actively interacting with an application; therefore, there can be limited context (e.g., in a search engine, if several users search for the same keywords, the responses could be grouped together). Hunt [0006], [0035], [0037], [0070]. The loss can be computed over pseudo-candidate sets, such that different ranking models can be trained for different user types.
A pseudo-candidate set can include using a batch of candidate content items and grouping the candidate content items by user type (i.e., training examples were clustered based on a correlation score), Hunt [0029]); and modifying, by the computing system, the ranking model based on the weighted pairwise ranking loss (receiving a set of candidate content items, training a plurality of ranking models using a loss calculated based on weightings of pairwise rankings of pairs of training content items selected from a set of training content items, Hunt [0006]. The ranking model can be trained for distinguishing between features of the content items and identifying relationships between features and a user (e.g., the user that a push notification may be sent to). The ranking model can have an associated weight(s). … a labeled input content items set (e.g., documents with labels indicating a ranking order of the documents and/or the highest ranked document) and the predicted ranking can be compared. A loss can be generated based on the difference between the labeled ranking and the predicted ranking (i.e., weighted pairwise ranking loss). Training iterations can continue until the loss is minimized and/or until loss does not change significantly from iteration to iteration. In an example implementation, the lower the loss, the better the predicted ranking (i.e., modifying the ranking model based on the weighted pairwise ranking loss), Hunt [0006], [0030], [0032-0033], [0091-0092]). 
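The weighted pairwise ranking loss that the claims and the Hunt citations above describe can be pictured with a short sketch. This is not code from the application or from the cited art; it is a minimal illustration that assumes a logistic pairwise loss, with the threshold gate of claim 25 and the absolute-value weighting of claim 26 folded in. All names and the default threshold are hypothetical.

```python
import math

def weighted_pairwise_loss(score_i, score_j, label_i, label_j, corr, threshold=0.5):
    """Illustrative weighted pairwise logistic ranking loss for one cross-list pair.

    score_i, score_j: model scores derived from the items' intermediate
    representations; label_i, label_j: relevance labels; corr: correlation
    score between the two queries the items came from.
    """
    # Identity-function gate (cf. claim 25): pairs drawn from weakly
    # correlated queries contribute nothing to the loss.
    if abs(corr) < threshold:
        return 0.0
    # A pair with equal labels carries no preference signal.
    if label_i == label_j:
        return 0.0
    # Orient the pair so the higher-labeled item should outscore the other.
    margin = (score_i - score_j) if label_i > label_j else (score_j - score_i)
    # Logistic pairwise loss, weighted by |corr| (cf. claim 26).
    return abs(corr) * math.log(1.0 + math.exp(-margin))
```

Summing this quantity over many pairs and updating the ranking model's parameters by gradient descent would correspond to the "modifying the ranking model based on the weighted pairwise ranking loss" step.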
Hunt does not teach: processing, by the computing system, a first item from the first plurality of items with a ranking model to generate a first intermediate representation for the first item; and processing, by the computing system, a second item from the second plurality of items with the ranking model to generate a second intermediate representation for the second item.

However, Liu teaches processing, by the computing system, a first item from the first plurality of items with a ranking model to generate a first intermediate representation for the first item (the hybrid labeling procedure can assign a first set of labels to training event samples that are determined to have positive engagement. The first set of labels can enable the personalized ranking model to learn or understand individual user preferences for each user. In some cases, the first set of labels can be explicit values that are assigned based on engagement activity types (i.e., a first item from the first plurality of items is processed with a ranking model to generate a first intermediate representation for the first item), Liu [0085]); and processing, by the computing system, a second item from the second plurality of items with the ranking model to generate a second intermediate representation for the second item (the hybrid labeling procedure can assign a second set of labels to training event samples that are determined to have negative engagement. The second set of labels can be determined based on aggregated engagement information for the items across global users. The second set of labels can enable the personalized ranking model to learn or understand global item popularity for items. In some embodiments, the aggregated engagement information can be derived from ranking features (i.e., a second item from the second plurality of items is processed with the ranking model to generate a second intermediate representation for the second item), Liu [0089]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt and Liu before him/her, to modify Hunt with the teaching of Liu's systems and methods for improving search results personalization and contextualization using machine learning models. One would have been motivated to do so because the relevancy of the search results presented to users is greater in comparison to other techniques; users save time and effort in identifying desired items because the most relevant items appear near the top or beginning of the search result listings, saving users from scrolling through the search results or navigating through several interfaces to identify the most relevant or desired items, thereby improving user experience, customer retention, and conversion rates (Liu, Abstract and [0063]).
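The "intermediate representation" limitations discussed above can be pictured with a toy two-tower sketch. This is not from the application or the cited art: the single linear "item tower" and all names are assumptions for illustration. An item tower maps raw item features to an intermediate representation, and a logit for an (item, query) pair is then the dot product of that representation with a query embedding, as claim 23 later recites.

```python
def item_representation(item_features, weights):
    """Toy 'item tower': one linear layer mapping raw item features to an
    intermediate representation (a stand-in for the ranking model's item
    layers; a real model would be learned and nonlinear)."""
    return [sum(w * f for w, f in zip(row, item_features)) for row in weights]

def logit_score(item_rep, query_emb):
    """Logit for an (item, query) pair as the dot product of the item's
    intermediate representation and the query embedding."""
    return sum(r * q for r, q in zip(item_rep, query_emb))
```

For example, with identity weights the representation passes the features through unchanged, and the logit is a plain dot product against the query embedding.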
Hunt and Liu do not teach evaluating, by the computing system, a pairwise ranking loss based on the first intermediate representation and the second intermediate representation.

However, Renders teaches evaluating, by the computing system, a pairwise ranking loss based on the first intermediate representation and the second intermediate representation (a difference between the second relevance score f(u, i; θ) and the weighted first relevance score, wherein the computed probability corresponds to a pairwise relevance probability of having the item i preferred to the item j by the entity u if g(u, i, j; θ_g) = 1, to a pointwise relevance probability defining how relevant the item i is to the entity u if g(u, i, j; θ_g) = 0, and defines the continuum between pointwise ranking and pairwise ranking of items if 0 < g(u, i, j; θ_g) < 1; and (a7) learning optimized values of the first and second sets of learnable parameters θ and θ_g by optimizing the loss function, depending on θ and θ_g, through gradient descent optimization, the loss function being defined as a sum over all triplets <u, i, j> of a function derived from the probability of having the item i preferred to the item j by the entity u, Renders [0161]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt, Liu, and Renders before him/her, to further modify Hunt with the teaching of Renders' adaptive pointwise-pairwise learning to rank. One would have been motivated to do so for the benefit of an adaptive pointwise-pairwise learning-to-rank method that is not mutually exclusive, but combines the pointwise and pairwise approaches for the same task and dataset, and learns from the data to combine the pointwise and pairwise approaches optimally and adaptively (Renders, Abstract and [0021]).

Regarding claim 25.
Hunt as modified teaches wherein evaluating, by the computing system, the pairwise ranking loss based on the first intermediate representation and the second intermediate representation (Renders [0161]) comprises: evaluating, by the computing system, an identity function term included in the weighted pairwise ranking loss, wherein the identity function term equals zero when the correlation score is less than a threshold score and equals one when the correlation score is greater than the threshold score (Hunt [0008], [0093]).

Regarding claim 26. Hunt as modified teaches wherein evaluating, by the computing system, the pairwise ranking loss based on the first intermediate representation and the second intermediate representation (Renders [0161]) comprises: weighting, by the computing system, the pairwise ranking loss by an absolute value of the correlation score (Hunt [0005-0006], [0029], [0032]).

Regarding claim 27. Hunt as modified teaches wherein evaluating, by the computing system, the pairwise ranking loss based on the first intermediate representation and the second intermediate representation (Renders [0161]) comprises: normalizing, by the computing system, the pairwise ranking loss based on a total weight associated with the first item and the second item (Hunt [0035], [0086]).

Regarding claim 28. Hunt as modified teaches further comprising: selecting, by the computing system, for evaluation with the weighted pairwise ranking loss, the first training example and the second training example from a batch of training examples based on query similarity (Hunt [0022-0023], [0059]).

Regarding claim 29. Hunt as modified teaches wherein: the first training example and the second training example are included in a training dataset comprising a plurality of training examples (Hunt [0022-0023], [0059]); and the method comprises clustering the plurality of training examples into a plurality of training batches based on query similarity, whereby the first training example and the second training example are placed into a shared training batch for evaluation with the weighted pairwise ranking loss (Hunt [0006], [0070]).

Regarding claim 30. Hunt as modified teaches further comprising: determining, by the computing system, the correlation score between the first query and the second query, wherein determining the correlation score comprises processing, by the computing system, one or both of the first query and the second query with an attention network to determine the correlation score between the first query and the second query, wherein the attention network has been trained to predict a query embedding for one query from other queries included in a training batch (Hunt [0006], [0029], [0035], [0037], [0070], [0082-0088] and Fig. 5).

Regarding claim 31. Hunt as modified teaches wherein: evaluating, by the computing system, the pairwise ranking loss based on the first intermediate representation and the second intermediate representation comprises evaluating, by the computing system, a pairwise ranking loss based on the first intermediate representation, the second intermediate representation, a first positive label associated with the first item, and a second negative label associated with the second item (Renders [0061-0062], [0163]); and wherein the pairwise ranking loss seeks to minimize a probability that the second item receives a prediction of a positive label which is larger than the first item (Renders [0161]).

Regarding claim 32. Hunt as modified teaches wherein evaluating, by the computing system, the pairwise ranking loss and modifying, by the computing system, the ranking model are performed in a reinforcement learning with human feedback approach (Hunt [0004-0007], [0036]).

Regarding claim 33. Hunt as modified teaches wherein: the ranking model is used to train a generative model (Hunt [0004-0007], [0036]).

Regarding claim 35. Hunt teaches one or more non-transitory computer-readable media that store computer-readable instructions that, when executed by a computing system, cause the computing system to perform operations (a system, a non-transitory computer-readable medium (having stored thereon computer executable program code which can be executed on a computer system), and/or a method can perform a process with a method including receiving a set of candidate content items, training a plurality of ranking models using a loss calculated based on weightings of pairwise rankings of pairs of training content items selected from a set of training content items, Hunt [0006], [0117]). Although claim 35 is directed to media, it is similar in scope to claim 21; the method steps of claim 21 substantially encompass the media recited in claim 35. Therefore, claim 35 is rejected for at least the same reasons as claim 21 above.

Regarding claim 36. Hunt as modified teaches wherein the first attribute comprises a first feature of the first item and the second attribute comprises a second feature of the second item (Hunt [0006], [0070]).

Regarding claim 39. Hunt as modified teaches wherein the first attribute comprises a first user feature associated with the first query and the second attribute comprises a second user feature associated with the second query (Renders [0085], [0161]).

Regarding claim 40.
Hunt as modified teaches wherein: the ranking model is used to train a generative model; and evaluating, by the computing system, the pairwise ranking loss and modifying, by the computing system, the ranking model are performed in a reinforcement learning with human feedback approach (Hunt [0004-0007], [0036]).

Claims 22 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Hunt et al. (US Patent Publication No. 2022/0374483 A1, 'Hunt' hereafter) in view of Liu et al. (US Patent Publication No. 2023/0244727 A1, 'Liu' hereafter), in view of Renders et al. (US Patent Publication No. 2021/0383254 A1, 'Renders' hereafter), and further in view of Kataria et al. (US Patent Publication No. 20230244727 A1, 'Kataria' hereafter).

Regarding claim 22. Hunt, Liu, and Renders do not teach determining, by the computing system, the correlation score between the first query and the second query comprising: generating, by the computing system, a first query embedding for the first query; generating, by the computing system, a second query embedding for the second query; and evaluating, by the computing system, a similarity metric between the first query embedding and the second query embedding to generate the correlation score.

However, Kataria teaches determining, by the computing system, the correlation score between the first query and the second query comprising: generating, by the computing system, a first query embedding for the first query (Kataria [0038], [0044]); generating, by the computing system, a second query embedding for the second query (Kataria [0038], [0044]); and evaluating, by the computing system, a similarity metric between the first query embedding and the second query embedding to generate the correlation score (Kataria [0018], [0038]).
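Claim 22's pattern (embed both queries, then evaluate a similarity metric to produce the correlation score) is the standard embedding-similarity recipe. A minimal sketch, assuming cosine similarity as the metric; the claim itself does not fix a particular metric, and the function names are hypothetical:

```python
import math

def correlation_score(query_emb_a, query_emb_b):
    """Cosine similarity between two query embeddings, used here as an
    illustrative similarity metric producing the correlation score."""
    dot = sum(a * b for a, b in zip(query_emb_a, query_emb_b))
    norm_a = math.sqrt(sum(a * a for a in query_emb_a))
    norm_b = math.sqrt(sum(b * b for b in query_emb_b))
    return dot / (norm_a * norm_b)
```

Identical embeddings score 1.0, orthogonal embeddings 0.0; the score could then gate or weight the pairwise loss as recited in claims 25 and 26.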
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt, Liu, Renders, and Kataria before him/her, to further modify Hunt with the teaching of Kataria's semantic clustering based retrieval for candidate set expansion. One would have been motivated to do so for the benefit of using semantic similarity in a machine-learned job posting result ranking model to solve technical problems such as cross-language retrieval and ranking, and retrieval degradation due to query preprocessing errors (Kataria, Abstract and [0001]).

Regarding claim 23. Hunt as modified teaches wherein: generating, by the computing system, the first query embedding for the first query comprises processing the first query with one or more query layers of the ranking model (Kataria [0038], [0044]); generating, by the computing system, the second query embedding for the second query comprises processing the second query with the one or more query layers of the ranking model (Kataria [0038], [0044]); and the method further comprises determining, by the computing system, a first logit score for the first item and the first query as a dot product of the first intermediate representation for the first item and the first query embedding for the first query (Renders [0016]).

Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Hunt et al. (US Patent Publication No. 2022/0374483 A1, 'Hunt' hereafter) in view of Liu et al. (US Patent Publication No. 2023/0244727 A1, 'Liu' hereafter), in view of Renders et al. (US Patent Publication No. 2021/0383254 A1, 'Renders' hereafter), and further in view of Fang et al. (US Patent Publication No. 2019/0251612 A1, 'Fang' hereafter).

Regarding claim 24.
Hunt, Liu, and Renders do not teach wherein evaluating, by the computing system, the pairwise ranking loss based on the first intermediate representation and the second intermediate representation comprises: scaling down, by the computing system, the pairwise ranking loss using a scaling factor in response to the second query being different from the first query.

However, Fang teaches wherein evaluating, by the computing system, the pairwise ranking loss based on the first intermediate representation and the second intermediate representation comprises: scaling down, by the computing system, the pairwise ranking loss using a scaling factor in response to the second query being different from the first query (down scaling, Fang [0069]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt, Liu, Renders, and Fang before him/her, to further modify Hunt with the teaching of Fang's generating user-customized items using a visually-aware image generation network. One would have been motivated to do so for the benefit of determining latent item features for a user using user-based triplets and a personalized ranking model that determines latent user features for the user, which the personalized fashion generation system jointly trains to produce the personalized preference network, which outputs preference prediction scores per user for each inputted item; alternatively, the personalized fashion generation system employs a pre-trained personalized preference network (Fang, Abstract and [0027]).

Claim 34 is rejected under 35 U.S.C. 103 as being unpatentable over Hunt et al. (US Patent Publication No. 2022/0374483 A1, 'Hunt' hereafter) in view of Liu et al. (US Patent Publication No. 2023/0244727 A1, 'Liu' hereafter), in view of Renders et al. (US Patent Publication No. 2021/0383254 A1, 'Renders' hereafter), and further in view of Mace et al.
(US Patent Publication No. 2024/0070270 A1, ‘Mace’, hereafter).

Regarding claim 34. Hunt, Liu, and Renders do not teach wherein: the ranking model is used to train a large language model. However, Mace teaches wherein: the ranking model is used to train a large language model (Mace [0028], [0082]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt, Liu, Renders, and Mace before him/her, to further modify Hunt with the teaching of Mace’s generating security language queries. One would have been motivated to do so for the benefit of a system comprising a base large language model, a pooling layer, a ranking head, and a classification head, where the ranking head may be trained to select the user security hunting query and corresponding ground truth security language query, and the classification head may be trained to generate the query metadata (Mace, Abstract, [0156]).

Claims 37 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Hunt et al. (US Patent Publication No. 2022/0374483 A1, ‘Hunt’, hereafter) in view of Liu et al. (US Patent Publication No. 2023/0244727 A1, ‘Liu’, hereafter) in view of Renders et al. (US Patent Publication No. 2021/0383254 A1, ‘Renders’, hereafter) and further in view of Thimmaiah et al. (US Patent Publication No. 2020/0160373 A1, ‘Thimmaiah’, hereafter).

Regarding claim 37. Hunt, Liu, and Renders do not teach wherein: the first item comprises a first content item and the first attribute comprises a first identity associated with a publisher of the first content item; the second item comprises a second content item and the second attribute comprises a second identity associated with a publisher of the second content item.
However, Thimmaiah teaches wherein: the first item comprises a first content item and the first attribute comprises a first identity associated with a publisher of the first content item; the second item comprises a second content item and the second attribute comprises a second identity associated with a publisher of the second content item (Thimmaiah [0047]-[0048], [0060]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Hunt, Liu, Renders, and Thimmaiah before him/her, to further modify Hunt with the teaching of Thimmaiah’s optimizing and predicting campaign attributes. One would have been motivated to do so for the benefit of automatically optimizing sponsored content campaigns for a sponsored content provider for a particular consumption category across different content publisher networks by obtaining and utilizing performance data from other campaigns promoting goods and/or services for the same consumption category. Accordingly, additional efficient and cost-conscious interactions can be accomplished (Thimmaiah, Abstract and [0039]).

Regarding claim 38. Hunt as modified teaches wherein: the first item comprises a first content item and the first attribute comprises a first genre associated with the first content item; the second item comprises a second content item and the second attribute comprises a second genre associated with the second content item (Thimmaiah [0047]-[0048], [0060]).

Conclusion

The prior art made of record, listed on form PTO-892, and not relied upon, if any, is considered pertinent to applicant’s disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASANUL MOBIN, whose telephone number is (571) 270-1289. The examiner can normally be reached 9:30 AM to 6:00 PM EST, M-F.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones, can be reached at 571-272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HASANUL MOBIN/
Primary Examiner, Art Unit 2168
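The cross-query loss scaling recited in claim 24 can be sketched as a weighted logistic pairwise loss. The logistic loss form and the 0.1 scaling factor are illustrative assumptions, not taken from Fang:

```python
import math

def pairwise_ranking_loss(logit_pos, logit_neg, same_query, cross_query_scale=0.1):
    """Logistic pairwise ranking loss over two logit scores.

    When the two results come from different queries, the loss is scaled
    down by cross_query_scale, so cross-query pairs contribute a weaker
    training signal than same-query pairs.
    """
    loss = math.log1p(math.exp(logit_neg - logit_pos))  # log(1 + e^{-(pos - neg)})
    if not same_query:
        loss *= cross_query_scale
    return loss

# The same score pair, with and without the cross-query down-scaling.
l_same = pairwise_ranking_loss(2.0, 1.0, same_query=True)
l_cross = pairwise_ranking_loss(2.0, 1.0, same_query=False)
```

Down-weighting rather than discarding cross-query pairs keeps their ordering information while limiting the noise they introduce into training.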

Prosecution Timeline

Apr 29, 2025
Application Filed
Mar 07, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602398
SYNCHRONIZING STATE IN LARGE-SCALE DISTRIBUTION SYSTEMS
2y 5m to grant Granted Apr 14, 2026
Patent 12602390
DATA ANALYSIS SYSTEM AND METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12591542
DIRECTORY METADATA OPERATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12585668
EFFICIENT STATE SYNCHRONIZATION IN A CLUSTERED ENVIRONMENT USING COMPACTED KEY/TUPLE REPRESENTATIONS AND SNAPSHOT-BASED STATE RESTORATION
2y 5m to grant Granted Mar 24, 2026
Patent 12572504
DATA ORGANIZER OPTIMIZING RECONCILIATION SYSTEMS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+39.0%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 675 resolved cases by this examiner. Grant probability derived from career allow rate.
