Prosecution Insights
Last updated: April 19, 2026
Application No. 19/078,662

ENTITY-AWARE MULTI-TASK MACHINE LEARNING

Non-Final OA — §101, §103
Filed
Mar 13, 2025
Examiner
RICHARDSON, JAMES E
Art Unit
2169
Tech Center
2100 — Computer Architecture & Software
Assignee
Walmart Apollo, LLC
OA Round
1 (Non-Final)
81%
Grant Probability
Favorable
1-2
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 81% — above average
81%
Career Allow Rate
410 granted / 506 resolved
+26.0% vs TC avg
Strong +32% interview lift
+31.6%
Interview Lift
Based on resolved cases with an interview
Typical timeline
3y 1m
Avg Prosecution
14 currently pending
Career history
520
Total Applications
across all art units

Statute-Specific Performance

§101: 17.5% (-22.5% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 506 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this application.

Information Disclosure Statement

The information disclosure statement filed 03/13/2025 fails to fully comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been fully considered. Specifically, no legible copy of foreign reference CN 112749561 A has been provided: the document filed is not a complete copy, and the text that is provided is not legible. The attached English translation is likewise not legible and as such does not constitute a concise explanation of the relevance. Accordingly, the document has not been considered. All other references cited in the IDS have been considered.

The information disclosure statement (IDS) submitted on 10/02/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. It is noted that NPL cite 1 did not include a place of publication; the IDS reference has been annotated by the examiner to include it and has been considered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 6-9, 14, 15, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of mental processes without significantly more.

As to claim 1, the claim recites the mental processes to: obtain a search query (A person can think of or read a query to obtain it.); retrieve, using an entity retrieval model, at least one entity based on the search query (The entity retrieval model is recited at a high level of generality such that it is merely the thought process a person uses to identify entities within the obtained search query, using their knowledge of the information in it.); generate query embedding data based on the search query and the at least one entity (The generating is recited at a high level of generality such that a person can mentally generate simple embedding data based on the query and entities as they see fit.); generate, using a plurality of task-specific networks, task prediction data for a plurality of tasks based on the query embedding data, wherein each task of the plurality of tasks captures a different aspect of a user intent associated with the search query (The task-specific networks are recited at a high level of generality. As such, each can be seen as a different mental process for determining whether the query embedding corresponds to a certain type of intent.); and generate at least one search result for the search query based on the task prediction data for the plurality of tasks (A person can mentally read through data associated with mentally determined intents and determine whether a query matches, to mentally generate results.).
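For orientation before the integration analysis that follows, the recited pipeline maps onto a conventional shared-encoder, multi-head architecture. A minimal PyTorch sketch under that reading (every module name, dimension, and the additive fusion of query and entity vectors is an illustrative assumption, not the applicant's implementation):

```python
# Minimal sketch of the recited pipeline: entity retrieval feeds a shared
# encoder whose query embedding drives a plurality of task-specific heads.
# All names, sizes, and the additive fusion are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskQueryModel(nn.Module):
    def __init__(self, vocab_size=30522, dim=256, num_tasks=4, classes_per_task=8):
        super().__init__()
        # Stand-in for the recited "neural network" that generates embedding data.
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # mean-pools token ids
        # Stand-in for the recited "plurality of task-specific networks".
        self.task_heads = nn.ModuleList(
            nn.Linear(dim, classes_per_task) for _ in range(num_tasks)
        )

    def forward(self, query_tokens, entity_tokens):
        # Query embedding data "based on the search query and the at least one entity".
        joint = self.embed(query_tokens) + self.embed(entity_tokens)
        # Task prediction data per task, each capturing a different intent aspect.
        return [head(joint) for head in self.task_heads]

model = MultiTaskQueryModel()
preds = model(torch.randint(0, 30522, (2, 6)), torch.randint(0, 30522, (2, 3)))
print([p.shape for p in preds])  # four tasks, each (batch 2, 8 classes)
```

The §101 dispute is whether steps at this level of generality, run on generic hardware, amount to more than mental processes.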
This judicial exception is not integrated into a practical application because the features "a system, comprising: a processor; and a non-transitory memory storing instructions, that when executed, cause the processor" to perform the steps merely recite generic computer components intended to merely implement the abstract idea on a computer. Likewise, the recited "neural network" used to generate embedding data is recited at a high level of generality and is also a generic computer component intended to merely implement the abstract idea on a computer, or merely in the field of artificial intelligence/machine learning. See MPEP §2106.05(f) and §2106.05(h). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, again, the additional elements of "a system, comprising: a processor; and a non-transitory memory storing instructions, that when executed, cause the processor" to perform the steps, and the neural network, are generic computer components intended to merely implement the abstract idea on a computer or merely in the field of artificial intelligence/machine learning. See MPEP §2106.05(f) and §2106.05(h).

As to claim 9, the claim recites the mental processes of a method, comprising: obtaining a search query (A person can think of or read a query to obtain it.); retrieving, using an entity retrieval model, at least one entity based on the search query (The entity retrieval model is recited at a high level of generality such that it is merely the thought process a person uses to identify entities within the obtained search query, using their knowledge of the information in it.); generating query embedding data based on the search query and the at least one entity (The generating is recited at a high level of generality such that a person can mentally generate simple embedding data based on the query and entities as they see fit.); generating, using a plurality of task-specific networks, task prediction data for a plurality of tasks based on the query embedding data, wherein each task of the plurality of tasks captures a different aspect of a user intent associated with the search query (The task-specific networks are recited at a high level of generality. As such, each can be seen as a different mental process for determining whether the query embedding corresponds to a certain type of intent.); and generating at least one search result for the search query based on the task prediction data for the plurality of tasks (A person can mentally read through data associated with mentally determined intents and determine whether a query matches, to mentally generate results.). This judicial exception is not integrated into a practical application because implementing the method as "computer-implemented" merely attempts to apply the abstract idea by a computer (see MPEP §2106.05(f)), and "using a neural network" to generate embedding data is recited at a high level of generality such that the neural network is also a generic computer component intended to merely implement the abstract idea on a computer, or merely in the field of artificial intelligence/machine learning. See MPEP §2106.05(f) and §2106.05(h).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, again, the additional element, the neural network, is a generic computer component intended to merely implement the abstract idea on a computer or merely in the field of artificial intelligence/machine learning. See MPEP §2106.05(f) and §2106.05(h).

As to claim 15, the claim recites the mental processes comprising: obtaining a search query (A person can think of or read a query to obtain it.); retrieving, using an entity retrieval model, at least one entity based on the search query (The entity retrieval model is recited at a high level of generality such that it is merely the thought process a person uses to identify entities within the obtained search query, using their knowledge of the information in it.); generating query embedding data based on the search query and the at least one entity (The generating is recited at a high level of generality such that a person can mentally generate simple embedding data based on the query and entities as they see fit.); generating, using a plurality of task-specific networks, task prediction data for a plurality of tasks based on the query embedding data, wherein each task of the plurality of tasks captures a different aspect of a user intent associated with the search query (The task-specific networks are recited at a high level of generality. As such, each can be seen as a different mental process for determining whether the query embedding corresponds to a certain type of intent.); and generating at least one search result for the search query based on the task prediction data for the plurality of tasks (A person can mentally read through data associated with mentally determined intents and determine whether a query matches, to mentally generate results.). This judicial exception is not integrated into a practical application because the features "a non-transitory computer readable medium having instructions stored thereon, wherein the instructions, when executed by at least one processor, cause at least one device to perform" the operations merely recite generic computer components intended to merely implement the abstract idea on a computer. Likewise, the recited "neural network" used to generate embedding data is recited at a high level of generality and is also a generic computer component intended to merely implement the abstract idea on a computer, or merely in the field of artificial intelligence/machine learning. See MPEP §2106.05(f) and §2106.05(h). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, again, the additional elements of the "non-transitory computer readable medium having instructions stored thereon, wherein the instructions, when executed by at least one processor, cause at least one device to perform" the operations, and the neural network, are generic computer components intended to merely implement the abstract idea on a computer or merely in the field of artificial intelligence/machine learning. See MPEP §2106.05(f) and §2106.05(h).

As to claims 6, 14, and 20, the claims are rejected for the same reasons as claims 1, 9, and 15 above.
In addition, the claims recite the mental process wherein, during a training stage of the neural network and the plurality of task-specific networks: only one of the plurality of task-specific networks is activated during each training step of a plurality of training steps of the training stage (A person can decide to use only one mental "task-specific network" to determine intents.); and the neural network is activated and trained during all training steps of the training stage. This judicial exception is not integrated into a practical application because the feature "the neural network is activated and trained during all training steps of the training stage" merely recites what neural networks are designed to do during training, which is to train on all input received. The claims furthermore do not recite the use of this to achieve any application, let alone a practical application. The claims do not include additional elements sufficient to amount to significantly more because "the neural network is activated and trained during all training steps of the training stage" is a well-understood, routine, and conventional operation of a neural network being trained on whatever input it receives. The claims do not recite any particulars of the neural network that would cause it to be trained differently than any generic neural network.

Additionally, the claim does not carry patentable weight. The claimed features are predicated on being performed "during a training stage of the neural network and the plurality of task-specific networks." However, the claims do not recite that the claimed system, method, and medium necessarily perform a training stage. Rather, the claims merely state that a training stage occurs somewhere, by something, and allow for the training stage to be an operation performed outside the scope of the claims, e.g. prior to receiving the search query. Accordingly, the claim does not recite any steps required to be performed by the claims, nor does the claim limit the structure of any claimed elements. As such, the features of claims 6, 14, and 20 do not carry patentable weight. See MPEP §2111.04. Accordingly, they cannot possibly amount to significantly more or recite a practical application of the abstract idea, since they are not elements performed by the claims.

As to claim 7, the claim is rejected for the same reasons as claim 6 above. In addition, the claim merely describes further steps of a training stage that is not required to be performed by the claimed system. Accordingly, the features of claim 7 do not carry patentable weight, see MPEP §2111.04, and cannot possibly amount to significantly more or recite a practical application of the abstract idea, since they are not elements performed by the claims.

As to claim 8, the claim is rejected for the same reasons as claim 1 above. In addition, the claim recites the mental processes wherein the plurality of tasks comprises at least one of: a product type classification task to determine a product type for the search query (A person can mentally interpret text and determine whether an element corresponds to a type of product.); a query catalog classification task to determine a query catalog for the search query (A person can mentally determine where to search for something, i.e. mentally determine which catalog is most relevant.); a named entity recognition task to identify one or more named entities in the search query (A person can mentally identify entities in text using their personal knowledge.); or a term weighting task to determine whether or not each token in the search query is to be excluded when retrieving a search result (A person can mentally determine some form of simple weighting as they see fit.).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 8-10, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. ("A Multi-task Learning Framework for Product Ranking with BERT," in Proceedings of the ACM Web Conference 2022 (WWW '22), Association for Computing Machinery, New York, NY, USA, 2022, pp. 493-501), hereinafter Wu, in view of Dhamija et al. (US 2023/0186351 A1), hereinafter Dhamija.

As to claim 1, Wu discloses a system, comprising: a processor (Pg. 499, Right Col. Lines 51-54); and a non-transitory memory storing instructions, that when executed, cause the processor to (Pg. 499, Right Col. Lines 51-54): obtain a search query (Fig. 1; Pg. 495, Left Col. Lines 30-33, Right Col. Lines 1-2; Pg. 499, Right Col. Lines 50-51, A query is received for returning ranked product results.), generate, using a neural network, query embedding data based on the search query, generate, using a plurality of task-specific networks, task prediction data for a plurality of tasks based on the query embedding data, wherein each task of the plurality of tasks captures a different aspect of a user intent associated with the search query (Fig. 1; Pg. 495, Left Col. Lines 36-38; Pg. 496, Left Col. Lines 29-51 and 53-56, Right Col. Lines 43-44, A plurality of gating networks, i.e. task-specific networks, are used to determine the relevance of each task and generate predictions corresponding to each task.), and generate at least one search result for the search query based on the task prediction data for the plurality of tasks (Pg. 499, Right Col. Lines 50-51, E.g. returning 100 results from a received query, the query having been processed using the features of Fig. 1 as discussed above and thus based on all features therein, including task prediction data.).
Wu does not specifically disclose retrieve, using an entity retrieval model, at least one entity based on the search query, and generate, using a neural network, query embedding data based on the search query and the at least one entity [emphasis added]. However, Dhamija discloses obtain a search query (Figs. 1 #101, 3A #301, and 5 #501, An English query is received.), generate, using an entity retrieval model, at least one entity based on the search query (Fig. 1, #106; [0028], [0029], Lines 2-4, A deep learning transformer-based model is used to identify named entities in received queries.), and generate, using a neural network, query embedding data based on the search query and the at least one entity (Figs. 3B #305 and 5 #504; [0029], [0040]-[0041], Layers (e.g. dense) of a neural network are used to generate embeddings for the different identified entities. As these are based on the queries received, the embeddings are based on both the entities and the queries. Additionally, Fig. 5 generates expanded queries (combining parts of an original query modified by an identified entity) from identified embeddings. These are similarly fed through Figs. 3A-B to generate similarity scores and the embeddings therein ([0081]-[0084]), thus further having embeddings based on a query and identified entity.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Wu with the teachings of Dhamija by modifying Wu such that the BERT model (which is a transformer-based model) is modified to identify entities in queries like the transformer-based model utilizing nearest neighbors of Dhamija, such that the output query embedding is based on the query and identified entities, similar to Dhamija. Said artisan would have been motivated to do so in order to expand queries with additional nearest neighbors and better match products and tasks with queries for product purchases in an e-commerce environment, as is done with products and queries in Dhamija (Wu, Pg. 495, Left Col. Lines 25-41, Right Col. Lines 11-25; Dhamija, [0028], [0040]).

As to claim 9, Wu discloses a computer-implemented method, comprising: obtaining a search query (Fig. 1; Pg. 495, Left Col. Lines 30-33, Right Col. Lines 1-2; Pg. 499, Right Col. Lines 50-51, A query is received for returning ranked product results.); generating, using a neural network, query embedding data based on the search query; generating, using a plurality of task-specific networks, task prediction data for a plurality of tasks based on the query embedding data, wherein each task of the plurality of tasks captures a different aspect of a user intent associated with the search query (Fig. 1; Pg. 495, Left Col. Lines 36-38; Pg. 496, Left Col. Lines 29-51 and 53-56, Right Col. Lines 43-44, A plurality of gating networks, i.e. task-specific networks, are used to determine the relevance of each task and generate predictions corresponding to each task.); and generating at least one search result for the search query based on the task prediction data for the plurality of tasks (Pg. 499, Right Col. Lines 50-51, E.g. returning 100 results from a received query, the query having been processed using the features of Fig. 1 as discussed above and thus based on all features therein, including task prediction data.).
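As with claim 1, the limitation Wu is said to lack is the entity-retrieval step, which claim 2 (addressed below) spells out as a standard dense-retrieval pattern: encode and normalize the query, run a nearest-neighbor search over normalized entity embeddings, then fetch entities by index. A hedged NumPy sketch of that pattern, with toy stand-ins for the encoder output, the embeddings, and the entity database (none of this is Dhamija's or the applicant's actual model):

```python
# Hedged sketch of the claim-2 style retrieval: encode and L2-normalize the
# query, nearest-neighbor search against normalized entity embeddings, and
# fetch entities from a database by the resulting indices. Random vectors
# stand in for a real encoder; the tiny database is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

entity_db = ["running shoes", "trail running shoes", "shoe rack"]  # hypothetical entity database
entity_embs = normalize(rng.random((len(entity_db), 64)))          # normalized entity embeddings

def retrieve_entities(raw_query_emb, k=2):
    q = normalize(raw_query_emb)        # normalized query embedding
    scores = entity_embs @ q            # cosine similarity on unit vectors
    idx = np.argsort(-scores)[:k]       # "at least one index" via nearest neighbors
    return [entity_db[i] for i in idx]  # locate and retrieve by index

print(retrieve_entities(rng.random(64)))
```

Claim 2's recited "at least one index" corresponds to the argsort result in this pattern.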
Wu does not specifically disclose retrieving, using an entity retrieval model, at least one entity based on the search query, and generating, using a neural network, query embedding data based on the search query and the at least one entity [emphasis added]. However, Dhamija discloses obtaining a search query (Figs. 1 #101, 3A #301, and 5 #501, An English query is received.), generating, using an entity retrieval model, at least one entity based on the search query (Fig. 1, #106; [0028], [0029], Lines 2-4, A deep learning transformer-based model is used to identify named entities in received queries.), and generating, using a neural network, query embedding data based on the search query and the at least one entity (Figs. 3B #305 and 5 #504; [0029], [0040]-[0041], Layers (e.g. dense) of a neural network are used to generate embeddings for the different identified entities. As these are based on the queries received, the embeddings are based on both the entities and the queries. Additionally, Fig. 5 generates expanded queries (combining parts of an original query modified by an identified entity) from identified embeddings. These are similarly fed through Figs. 3A-B to generate similarity scores and the embeddings therein ([0081]-[0084]), thus further having embeddings based on a query and identified entity.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Wu with the teachings of Dhamija by modifying Wu such that the BERT model (which is a transformer-based model) is modified to identify entities in queries like the transformer-based model utilizing nearest neighbors of Dhamija, such that the output query embedding is based on the query and identified entities, similar to Dhamija. Said artisan would have been motivated to do so in order to expand queries with additional nearest neighbors and better match products and tasks with queries for product purchases in an e-commerce environment, as is done with products and queries in Dhamija (Wu, Pg. 495, Left Col. Lines 25-41, Right Col. Lines 11-25; Dhamija, [0028], [0040]).

As to claim 15, Wu discloses a non-transitory computer readable medium having instructions stored thereon, wherein the instructions, when executed by at least one processor, cause at least one device to perform operations comprising (Pg. 499, Right Col. Lines 51-54): obtaining a search query (Fig. 1; Pg. 495, Left Col. Lines 30-33, Right Col. Lines 1-2; Pg. 499, Right Col. Lines 50-51, A query is received for returning ranked product results.); generating, using a neural network, query embedding data based on the search query; generating, using a plurality of task-specific networks, task prediction data for a plurality of tasks based on the query embedding data, wherein each task of the plurality of tasks captures a different aspect of a user intent associated with the search query (Fig. 1; Pg. 495, Left Col. Lines 36-38; Pg. 496, Left Col. Lines 29-51 and 53-56, Right Col. Lines 43-44, A plurality of gating networks, i.e. task-specific networks, are used to determine the relevance of each task and generate predictions corresponding to each task.); and generating at least one search result for the search query based on the task prediction data for the plurality of tasks (Pg. 499, Right Col. Lines 50-51, E.g. returning 100 results from a received query, the query having been processed using the features of Fig. 1 as discussed above and thus based on all features therein, including task prediction data.). Wu does not specifically disclose retrieving, using an entity retrieval model, at least one entity based on the search query, and generating, using a neural network, query embedding data based on the search query and the at least one entity [emphasis added]. However, Dhamija discloses obtaining a search query (Figs. 1 #101, 3A #301, and 5 #501, An English query is received.), generating, using an entity retrieval model, at least one entity based on the search query (Fig. 1, #106; [0028], [0029], Lines 2-4, A deep learning transformer-based model is used to identify named entities in received queries.), and generating, using a neural network, query embedding data based on the search query and the at least one entity (Figs. 3B #305 and 5 #504; [0029], [0040]-[0041], Layers (e.g. dense) of a neural network are used to generate embeddings for the different identified entities. As these are based on the queries received, the embeddings are based on both the entities and the queries. Additionally, Fig. 5 generates expanded queries (combining parts of an original query modified by an identified entity) from identified embeddings. These are similarly fed through Figs. 3A-B to generate similarity scores and the embeddings therein ([0081]-[0084]), thus further having embeddings based on a query and identified entity.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Wu with the teachings of Dhamija by modifying Wu such that the BERT model (which is a transformer-based model) is modified to identify entities in queries like the transformer-based model utilizing nearest neighbors of Dhamija, such that the output query embedding is based on the query and identified entities, similar to Dhamija. Said artisan would have been motivated to do so in order to expand queries with additional nearest neighbors and better match products and tasks with queries for product purchases in an e-commerce environment, as is done with products and queries in Dhamija (Wu, Pg. 495, Left Col. Lines 25-41, Right Col. Lines 11-25; Dhamija, [0028], [0040]).

As to claims 2, 10, and 16, the claims are rejected for the same reasons as claims 1, 9, and 15 above. In addition, Wu, as previously modified with Dhamija, discloses wherein the at least one entity is retrieved based at least in part by: encoding and normalizing the search query into a normalized query embedding using a natural language model (Wu, Fig. 1; Pg. 495, Right Col. Lines 1-2, 10-12, and 23-25, A query embedding is generated from a BERT model, which normalizes search queries into a normalized query embedding; Dhamija, Fig. 5; [0039], Named entity embeddings are generated through normalization layers.); determining at least one index associated with at least one normalized entity embedding generated from one or more entities, based on a nearest neighbor search performed using the normalized query embedding to identify the at least one normalized entity embedding among a plurality of normalized entity embeddings (Dhamija, Fig. 5, [0029], [0072]-[0073], Nearest neighbor entities are located and filtered/sorted, thus determining an equivalent claimed index to locate and retrieve them.); and locating and retrieving the one or more entities from an entity database based on the at least one index (Dhamija, Fig. 5, [0029], [0072]-[0073], Nearest neighbor entities are located and filtered/sorted, thus determining an equivalent claimed index to locate and retrieve them.). The reasons and motivations for combining the teachings of Wu and Dhamija are the same as previously set forth with respect to claims 1, 9, and 15 above.

As to claim 8, the claim is rejected for the same reasons as claim 1 above. In addition, Wu, as previously modified with Dhamija, discloses wherein the plurality of tasks comprises at least one of: a product type classification task to determine a product type for the search query (Wu, Pg. 497, Left Col. Lines 17-44, classifying products as to their likelihood for tasks like click-through, add-to-cart, and purchase; Dhamija, [0059], [0071], e.g. as part of identifying product-related named entities); a query catalog classification task to determine a query catalog for the search query; a named entity recognition task to identify one or more named entities in the search query (Dhamija, Fig. 1, #106; [0028], [0029], Lines 2-4, [0059], [0071], NER); or a term weighting task to determine whether or not each token in the search query is to be excluded when retrieving a search result.

Claims 6, 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu and Dhamija as applied above, and further in view of Zhang et al. ("A Survey on Multi-Task Learning," IEEE Transactions on Knowledge and Data Engineering, vol. 34, no. 12, pp. 5586-5609, Dec. 2022, doi: 10.1109/TKDE.2021.3070203), hereinafter Zhang.

As to claims 6, 14, and 20, the claims are rejected for the same reasons as claims 1, 9, and 15 above. In addition, Wu, as previously modified with Dhamija, discloses wherein, during a training stage of the neural network and the plurality of task-specific networks, the neural network is activated and trained during all training steps of the training stage (Wu, Pg. 495, Right Col. Lines 10-25; Dhamija, [0065]-[0068], No indication is made of the neural network not being active and trained during the training stage.). Wu, as previously modified with Dhamija, does not specifically disclose only one of the plurality of task-specific networks is activated during each training step of a plurality of training steps of the training stage. However, it is noted that the claimed features are predicated on being performed "during a training stage of the neural network and the plurality of task-specific networks," and the claims do not recite that the claimed system, method, and medium necessarily perform a training stage, and by extension the steps therein. Rather, the claims merely state that a training stage occurs somewhere, by something, and allow for the training stage to be an operation performed outside the scope of the claims, e.g. prior to receiving the search query. Accordingly, the claim does not recite any steps required to be performed by the claims, nor does the claim limit the structure of any claimed elements. As such, the features of claims 6, 14, and 20 do not carry patentable weight and need not be disclosed by the prior art in rejecting the claims. See MPEP §2111.04. Accordingly, claims 6, 14, and 20 are fully rejected for the same reasons as claims 1, 9, and 15 above.

Additionally, and solely for more compact prosecution, Zhang discloses one of the plurality of task-specific networks is activated during each training step of a plurality of training steps of the training stage (Fig. 1(b)-1(c); Pg. 5587, Left Col. Lines 24-35, Each multi-task learning task is trained separately in its own training steps, which is analogous to only one being active as claimed.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Wu, as previously modified with Dhamija, with the teachings of Zhang, by further modifying the multi-task learning of Wu (Wu, Fig. 1; Pg. 495, Left Col. Lines 24-33) to train each task as is done with the multi-task learning of Zhang. Said artisan would have been motivated to do so in order to have each task learn on its own, single-task, when appropriate, while still leveraging multi-task learning (Zhang, Pg. 5588, Left Col. Lines 7-23).

As to claim 7, the claim is rejected for the same reasons as claim 6 above. In addition, Wu, as previously modified with Dhamija and Zhang, does not disclose wherein during each training step for training a task-specific network associated with a corresponding task, the processor is caused to: retrieve a plurality of entities for each query in a training dataset using the entity retrieval model; for each entity of the plurality of entities: generate an entity representation based at least in part by averaging token embeddings of all entity tokens of the entity, compress the entity representation into a representation score for the entity using a dense network, determine a labelled score for the entity, wherein the labelled score is generated by the entity retrieval model based on historical user engagement data, and generate a ranking loss for the entity based on the labelled score and the representation score; generate a ranking loss function based on a combination of all ranking losses for the plurality of entities; generate a task loss function for the corresponding task based on a cross-entropy loss and the training dataset; generate, for the corresponding task, a combined loss function based on a weighted combination of the task loss function for the corresponding task and the ranking loss function, using weights specific to the corresponding task; and train the neural network and the task-specific network based at least in part by minimizing the combined loss function. However, as set forth with respect to parent claim 6 above, the claimed features are predicated on being performed "during each training step" "during a training stage of the neural network and the plurality of task-specific networks." Again, the claims do not recite that the claimed system necessarily performs a training stage, and thus the corresponding steps therein. Rather, the claims merely state that a training stage occurs somewhere, by something, and allow for the training stage to be an operation performed outside the scope of the claims, e.g. prior to receiving the search query. Accordingly, the claim does not recite any steps required to be performed by the claims, nor does the claim limit the structure of any claimed elements. As such, the features of claim 7 do not carry patentable weight and need not be disclosed by the prior art in rejecting the claims. See MPEP §2111.04. Accordingly, claim 7 is fully rejected for the same reasons as claim 6 above.

Allowable Subject Matter

Claims 3-5, 11-13, and 17-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
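Claim 6 describes a round-robin regime (one task-specific head active per training step, the shared encoder trained on every step), and claim 7 adds a combined loss: a cross-entropy task loss plus an entity ranking loss built from averaged token embeddings compressed to a score by a dense network. A hedged sketch of that regime; the data, dimensions, per-task weights, and the MSE stand-in for the recited ranking loss are all invented for illustration, not the application's actual training code:

```python
# Hedged sketch of the claim 6/7 training regime: per step, only one
# task-specific head is active while the shared encoder trains every step;
# the loss is a task-weighted combination of a cross-entropy task loss and
# an entity ranking loss (mean token embeddings -> dense net -> score,
# compared against a labelled engagement score). All values are toy data.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, num_tasks, classes = 32, 3, 5
encoder = nn.Linear(dim, dim)                       # shared "neural network"
heads = nn.ModuleList(nn.Linear(dim, classes) for _ in range(num_tasks))
score_net = nn.Linear(dim, 1)                       # dense network compressing entity reps
opt = torch.optim.Adam(
    [*encoder.parameters(), *heads.parameters(), *score_net.parameters()]
)
task_weights = [(1.0, 0.5)] * num_tasks             # per-task (task loss, ranking loss) weights

for step in range(30):
    t = step % num_tasks                            # claim 6: one head active per step
    x = torch.randn(8, dim)                         # toy query features
    y = torch.randint(0, classes, (8,))             # toy task labels
    ent_tokens = torch.randn(8, 4, dim)             # toy entity token embeddings
    labelled = torch.randn(8, 1)                    # toy engagement-derived scores

    h = encoder(x)                                  # encoder participates in every step
    task_loss = F.cross_entropy(heads[t](h), y)     # only heads[t] gets gradients this step
    ent_rep = ent_tokens.mean(dim=1)                # average token embeddings (claim 7)
    rank_loss = F.mse_loss(score_net(ent_rep), labelled)  # stand-in ranking loss
    w_task, w_rank = task_weights[t]
    loss = w_task * task_loss + w_rank * rank_loss  # combined loss, weights specific to task t

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Whether a loop like this falls within the claims' scope at all, rather than being an operation completed before the recited steps run, is precisely the MPEP §2111.04 point the rejection turns on.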
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Shah et al. (US 2022/0129633 A1) discloses extracting entities and intents from natural language queries and, based on determined intents and their scores, searching respective data stores using the extracted entities (Figs. 4-5), e.g. searching stores, products sold, and order information databases depending on the intent of the query (Fig. 1). Query intent loss functions are used in training a neural network using BERT for entity detection (Fig. 2). Jayarao et al. (US 2022/0277143 A1) discloses identifying intents and entities from a query by tokenizing the query, determining embeddings from multiple tasks, and combining the embeddings into an intent and entity classifier (Fig. 4). Ramamohan (US 2022/0365955 A1) discloses processing a natural language query by multiple tasks to extract entities and perform query expansion, then generating an embedding based on the query and entity as a combined vector used for matching to determine results (Fig. 2A). Terms in the query can be scored to determine whether they are irrelevant and should be removed ([0032]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES E RICHARDSON, whose telephone number is (571) 270-1917. The examiner can normally be reached Mon-Fri 9:00-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sherief Badawi, can be reached at (571) 272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/James E Richardson/ Primary Examiner, Art Unit 2167

Prosecution Timeline

Mar 13, 2025
Application Filed
Jan 10, 2026
Non-Final Rejection — §101, §103
Mar 27, 2026
Applicant Interview (Telephonic)
Mar 27, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585638
QUERY EXECUTION USING A DATA PROCESSING SCHEME OF A SEPARATE DATA PROCESSING SYSTEM
2y 5m to grant • Granted Mar 24, 2026
Patent 12579112
LOCATION DATA PROCESSING SYSTEM
2y 5m to grant • Granted Mar 17, 2026
Patent 12572273
SYSTEM AND METHOD FOR KEY-VALUE SHARD CREATION AND MANAGEMENT IN A KEY-VALUE STORE
2y 5m to grant • Granted Mar 10, 2026
Patent 12572534
SELECTION QUERY LANGUAGE METHODS AND SYSTEMS
2y 5m to grant • Granted Mar 10, 2026
Patent 12566756
EFFICIENT EVENT-TYPE-BASED DISTRIBUTED LOG-ANALYTICS SYSTEM
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+31.6%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 506 resolved cases by this examiner. Grant probability is derived from the career allow rate (410 granted / 506 resolved ≈ 81%).
