Prosecution Insights
Last updated: April 19, 2026
Application No. 17/198,052

LEARNING APPARATUS, LEARNING METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM

Final Rejection: §101, §103
Filed
Mar 10, 2021
Examiner
GILLS, KURTIS
Art Unit
3624
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Yahoo Japan Corporation
OA Round
6 (Final)
Grant Probability
57% (Moderate)
OA Rounds
7-8
To Grant
3y 4m
With Interview
87%

Examiner Intelligence

Career Allow Rate
57% — grants 57% of resolved cases (307 granted / 536 resolved; +5.3% vs TC avg)
Interview Lift
+29.4% for resolved cases with interview (strong)
Avg Prosecution
3y 4m typical timeline (44 currently pending)
Total Applications
580 across all art units (career history)

Statute-Specific Performance

§101  37.5%  (-2.5% vs TC avg)
§103  42.7%  (+2.7% vs TC avg)
§102   6.5%  (-33.5% vs TC avg)
§112   6.7%  (-33.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 536 resolved cases
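The headline figures above are mutually consistent and can be reproduced from the counts the dashboard reports. A minimal sketch, assuming the interview lift is an additive percentage-point difference in allowance rate (the dashboard does not define it, but that assumption matches the 87% with-interview figure):

```python
# Reproduce the dashboard's headline figures from its reported counts.
# Assumption (not stated in the dashboard): "interview lift" is an additive
# percentage-point difference in allowance rate.

granted, resolved = 307, 536            # examiner career counts
allow_rate = 100 * granted / resolved   # career allowance rate, in percent

tc_delta = 5.3                          # "+5.3% vs TC avg"
tc_avg = allow_rate - tc_delta          # implied Tech Center average

interview_lift = 29.4                   # "+29.4% Interview Lift"
with_interview = allow_rate + interview_lift

print(f"Career allow rate:  {allow_rate:.1f}%")      # 57.3% -> shown as 57%
print(f"Implied TC average: {tc_avg:.1f}%")          # 52.0%
print(f"With interview:     {with_interview:.1f}%")  # 86.7% -> shown as 87%
```

The 86.7% figure rounding to the dashboard's 87% is what makes the additive-lift reading plausible.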

Office Action

§101, §103
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Notice to Applicant
In response to the communication received on 11/18/2025, the following is a Final Office Action for Application No. 17/198,052.

Status of Claims
Claims 1, 6, 9, and 10 are pending. Claims 2-5 and 7-8 are canceled.

Priority
As required by M.P.E.P. 201.14(c), acknowledgement is made of applicant’s claim for priority based on: 17/198,052, filed 03/10/2021 and having 2 RCE-type filings therein; claims foreign priority to 2020-050226, filed 03/19/2020.

Response to Amendments
Applicant’s amendments have been fully considered. Applicant’s amendments to the claims overcome the 35 U.S.C. 102 rejection, and hence the 35 U.S.C. 102 rejection has been withdrawn. A new ground of rejection, necessitated by amendment, is provided herein.

Response to Arguments
Applicant’s arguments with respect to the claims have been considered but are not persuasive. As to the §101 rejection, Applicant argues that the claims are eligible under Prong One of Step 2A; the Examiner respectfully disagrees. Per Prong One of Step 2A, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of: Mathematical Concepts, Mental Processes, or Certain Methods of Organizing Human Activity. In particular, the identified recitation falls within the Mental Processes, including concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), and/or Certain Methods of Organizing Human Activity, including managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
Since the recitation of the claims falls into at least one of the above Groupings, there is a basis for further analysis under Prong Two of Step 2A to determine whether the claim as a whole is directed to the recited abstract idea. Thus, the rejection is maintained.

Applicant argues that the claims are eligible under Prong Two of Step 2A; the Examiner respectfully disagrees. Per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The non-transitory computer readable storage medium, computer and/or processor is recited at a high level of generality, i.e., as a generic processor performing a generic computer function of processing/transmitting data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component. Further, using a non-transitory computer readable storage medium, computer and/or processor to, inter alia, perform the function of predicting a search query by comparing the vector generated from the search queries from the target user against vectors generated from the transition categories is a mere instruction to apply an exception using a generic computer component, which cannot integrate a judicial exception into a practical application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. In other words, the present claims use a generic processing device and memory medium to, inter alia, perform the function of predicting a search query by comparing the vector generated from the search queries from the target user against vectors generated from the transition categories, which is a concept that can be performed in the human mind.
The processor is merely used to perform the function(s), and the processor does not integrate the abstract idea into a practical application since there are no meaningful limits on practicing the abstract idea. Thus, since the claims are directed to the determined judicial exception in view of the two prongs of Step 2A, the analysis proceeds to Step 2B of the 2019 PEG flowchart. Thus, the rejection is maintained.

Applicant argues that the claims are eligible under Step 2B; the Examiner respectfully disagrees. There, the additional elements and combinations therewith are examined to determine whether the claims as a whole amount to significantly more than the judicial exception. It is noted here that the additional elements are to be considered both individually and as an ordered combination. In this case, the claims each at most comprise the additional elements of: non-transitory computer readable storage medium, computer and/or processor. Taken individually, the additional limitations are each generically recited and thus do not add significantly more to the respective limitations. Further, using a non-transitory computer readable storage medium, computer and/or processor to, inter alia, perform the function of predicting a search query by comparing the vector generated from the search queries from the target user against vectors generated from the transition categories is a mere instruction to apply an exception using a generic computer component, which cannot provide an inventive concept in Step 2B (or, looking back to Step 2A, cannot integrate a judicial exception into a practical application). For further support, the Applicant’s specification supports the claims being directed to use of a generic computer/memory type structure.
Taken as an ordered combination, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because the limitations correspond to the limitations referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, per the non-limiting, non-exclusive examples of MPEP § 2106.05. Thus, the rejection is maintained.

In an effort to further expedite prosecution, see the July 2024 Subject Matter Eligibility Examples, Example 47 (Anomaly Detection). Per the analysis of claim 2 of Example 47, the analysis refers to MPEP 2106.05(f), which provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Although the additional elements, e.g. (per Example 47) “using a trained ANN”, limit the identified judicial exceptions, e.g. (per Example 47) “detecting one or more anomalies in a data set using the trained ANN” and “analyzing the one or more detected anomalies using the trained ANN to generate anomaly data,” this type of limitation merely confines the use of the abstract idea to a particular technological environment (per Example 47: neural networks) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). As an exemplary direction for claim limitations to be eligible, see claims 1 and 3 of Example 47.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 6, 9, and 10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims fall within the statutory classes of process, machine, or manufacture; hence, the claims satisfy the statutory-category inquiry of Step 1. Step 2 is the two-part analysis from Alice Corp. (also called the Mayo test). The 2019 PEG sets forth a new procedure for Step 2A (called “revised Step 2A”) under which a claim is not “directed to” a judicial exception unless the claim satisfies a two-prong inquiry. The two-prong inquiry is as follows. Prong One: evaluate whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon). If the claim recites an exception, then Prong Two: evaluate whether the claim recites additional elements that integrate the exception into a practical application of the exception. The claims recite the following abstract idea, indicated by non-boldface font, and additional limitations, indicated by boldface font: 1.
A learning device comprising: a processor configured to: acquire time-series search queries input by a plurality of input customers who have input a reference query, the time-series search queries having been input in mutually different periods; acquire, from an operator device, designated categories for grouping time-series search queries, wherein each designated category is designated by an operator; categorize, for each input customer, the time-series search queries input in each period into the designated categories based on content and similarities; specify, for each input customer, a transition mode of the categorized time-series search queries input in each period, wherein each transition mode identifies a sequence of the designated categories; categorize, for each input customer, the transition mode into one of transition categories, the transition categories being higher-level categories of the designated categories and representing behavioral patterns of users transitioning between designated categories over the mutually different time periods, wherein each transition category identifies a path to the reference query, and wherein the path indicates a transition from one of the designated categories to another one of the designated categories; for each transition category, train, using (i) embedding vectors corresponding to the time-series search queries input by the plurality of input customers and (ii) a one-hot vector representing the transition mode of the plurality of input customers, a machine learning model to learn, for each transition category, temporal characteristics of behavioral transition patterns of change in the designated categories such that the machine learning model after training (i) receives search queries input by a target user; and (ii) generates a vector representing the transition mode of the target user; and for receiving the search queries input by the target user at a time later than the time-series
search queries input by the plurality of input customers, predict future search behavior based on historical transition sequences by comparing the vector generated from the search queries from the target user against vectors generated from the transition categories. [or] 9. A learning method executed by a processor, the learning method including: acquiring time-series search queries input by a plurality of input customers who have input a reference query, the time-series search queries having been input in mutually different periods; acquiring, from an operator device, designated categories for grouping time-series search queries, wherein each designated category is designated by an operator; categorizing, for each input customer, the time-series search queries input in each period into the designated categories based on content and similarities; specifying, for each input customer, a transition mode of the categorized time-series search queries input in each period, wherein each transition mode identifies a sequence of the designated categories; categorizing, for each input customer, the transition mode into one of transition categories, the transition categories being higher-level categories of the designated categories and representing behavioral patterns of users transitioning between designated categories over the mutually different time periods, wherein each transition category identifies a path to the reference query, and wherein the path indicates a transition from one of the designated categories to another one of the designated categories; for each transition category, training, using (i) embedding vectors corresponding to the time-series search queries input by the plurality of input customers and (ii) a one-hot vector representing the transition mode of the plurality of input customers, a machine learning model to learn, for each transition category, temporal characteristics of behavioral transition patterns of change in
the designated categories such that the machine learning model after training (i) receives search queries input by a target user; and (ii) generates a vector representing the transition mode of the target user; and for receiving the search queries input by the target user at a time later than the time-series search queries input by the plurality of input customers, predicting future search behavior based on historical transition sequences by comparing the vector generated from the search queries from the target user against vectors generated from the transition categories. [or] 10. A non-transitory computer readable storage medium having a learning program stored thereon, the learning program causes a computer to perform: acquiring time-series search queries input by a plurality of input customers who have input a reference query, the time-series search queries having been input in mutually different periods; acquiring, from an operator device, designated categories for grouping time-series search queries, wherein each designated category is designated by an operator; categorizing, for each input customer, the time-series search queries input in each period into the designated categories based on content and similarities; specifying, for each input customer, a transition mode of the categorized time-series search queries input in each period, wherein each transition mode identifies a sequence of the designated categories; categorizing, for each input customer, the transition mode into one of transition categories, the transition categories being higher-level categories of the designated categories and representing behavioral patterns of users transitioning between designated categories over the mutually different time periods, wherein each transition category identifies a path to the reference query, and wherein the path indicates a transition from one of the designated categories to another one of the designated categories; The claim(s) recite(s) the following
summarization of the abstract idea, which includes predicting a search query by comparing vectors of transition modes, via transition categories and designated categories of time-series search query groupings, executed by the additional element(s) of a non-transitory computer readable storage medium, computer and/or processor. This falls into at least the Abstract Idea Grouping of Mental Processes since the information can be analyzed by an abstract process of evaluation and judgment. Thus, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of: Mathematical Concepts, Mental Processes, or Certain Methods of Organizing Human Activity, since the identified recitation falls within the Mental Processes, including concepts performed in the human mind (including an observation, evaluation, judgment, or opinion).

Per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The non-transitory computer readable storage medium, computer and/or processor is recited at a high level of generality, i.e., as a generic processor performing a generic computer function of processing/transmitting data. This generic non-transitory computer readable storage medium, computer and/or processor limitation is no more than mere instructions to apply the exception using a generic computer component. Further, predicting a search query by comparing the vector generated from the search queries from the target user against vectors generated from the transition categories by a non-transitory computer readable storage medium, computer and/or processor is a mere instruction to apply an exception using a generic computer component, which cannot integrate a judicial exception into a practical application.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, since the claims are directed to the determined judicial exception in view of the two prongs of Step 2A, the analysis proceeds to Step 2B of the 2019 PEG flowchart.

Per Step 2B, the additional elements and combinations therewith are examined in the claims to determine whether the claims as a whole amount to significantly more than the judicial exception. It is noted here that the additional elements are to be considered both individually and as an ordered combination. In this case, the claims each at most comprise the additional elements of: non-transitory computer readable storage medium, computer and processor. Taken individually, the additional limitations are each generically recited and thus do not add significantly more to the respective limitations. Further, predicting a search query by comparing the vector generated from the search queries from the target user against vectors generated from the transition categories by a non-transitory computer readable storage medium, computer and/or processor is a mere instruction to apply an exception using a generic computer component, which cannot provide an inventive concept in Step 2B (or, looking back to Step 2A, cannot integrate a judicial exception into a practical application).
For further support, the Applicant’s specification supports the claims being directed to use of a generic computer/memory type structure at page 45, wherein “The control unit 400 is a controller and can be realized, for example, when a processor such as a Central Processing Unit (CPU) or a Micro Processing Unit (MPU) executes a various program(s).”

Taken as an ordered combination, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because the limitations correspond to the limitations referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, including, as non-limiting, non-exclusive examples:

i. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));

ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 134 S. Ct. at 2359-60, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));

iii.
Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or

iv. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook.

The courts have recognized the following computer functions, inter alia, to be well-understood, routine, and conventional functions when they are claimed in a merely generic manner: performing repetitive calculations; receiving, processing, and storing data (e.g., the present claims); electronically scanning or extracting data; electronic recordkeeping; automating mental tasks (e.g., a process/machine/manufacture for performing the present claims); and receiving or transmitting data (e.g., the present claims). The dependent claims do not cure the above stated deficiencies; in particular, the dependent claims further narrow the abstract idea without reciting further additional elements that integrate the exception into a practical application of the exception or providing significantly more than the abstract idea.
In particular, claim 6 recites the processor stated in the independent claim and further narrows the abstract idea without reciting further additional elements that integrate the exception into a practical application of the exception or providing significantly more than the abstract idea. Since there are no elements or ordered combination of elements that amount to significantly more than the judicial exception, the claims are not eligible subject matter under 35 U.S.C. § 101. Thus, viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
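For orientation, the training inputs recited in the claims quoted above, (i) embedding vectors for the time-series queries and (ii) a one-hot vector for the transition mode, can be illustrated with a minimal sketch. The category labels, the fixed list of transition categories, and the toy categorizer below are all hypothetical; neither the claims nor this Office Action fixes them.

```python
# Hypothetical illustration of the claimed "transition mode" and one-hot
# encoding. The category names and transition-category list are invented
# for illustration only; the claims do not specify them.

# A "transition mode" is the sequence of designated categories a user's
# queries pass through across periods; a "transition category" is a
# higher-level label for that path to the reference query.
TRANSITION_CATEGORIES = ["research->compare", "compare->buy", "research->buy"]

def transition_mode(queries_by_period, categorize):
    """Sequence of designated categories, one per period."""
    return [categorize(q) for q in queries_by_period]

def one_hot(transition_category):
    """One-hot vector over the enumerated transition categories."""
    vec = [0.0] * len(TRANSITION_CATEGORIES)
    vec[TRANSITION_CATEGORIES.index(transition_category)] = 1.0
    return vec

# Toy stand-in for the claimed content/similarity-based categorization.
def categorize(query):
    return "compare" if "vs" in query else "research"

mode = transition_mode(["best laptops", "laptop A vs B"], categorize)
print(mode)                          # ['research', 'compare']
print(one_hot("research->compare"))  # [1.0, 0.0, 0.0]
```

The one-hot vector is what the claims feed to the machine learning model alongside the query embeddings during training.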
Claims 1, 6, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Tavernier (US 10706450 B1) hereinafter referred to as Tavernier in view of Khan (CA 3102015 A1) hereinafter referred to as Khan. Tavernier teaches: Claim 1. A learning device comprising: a processor configured to (C.13 L.60 The interactive computing system 400 may include at least one memory 420 and one or more processing units (or processor(s)) 410. The memory 420 may include more than one memory and may be distributed throughout the interactive computing system 400. The memory 420 may store program instructions that are loadable and executable on the processor(s) 410 as well as data generated during the execution of these programs.): acquire time-series search queries input by a plurality of input customers who have input a reference query, the time-series search queries having been input in mutually different periods (C.16 L.49 The user interaction data repository 434 comprises one or more physical data storage devices that stores logged user behaviors with respect to representations of items within a network environment (“item interactions”) together with time data for the events. Interactions can also include user selection of catalog browse nodes and/or search refinement options, and item attribute value selections/inputs. The time data for the events can be a timestamp or other suitable date/time indicator, and can be used to generate item interaction timelines. Each of the item interactions and associated time data may be stored in, and maintained by, the user interaction data repository 434 with respect to a timeline for a particular user or aggregate group of users. 
Portions of these timelines can be associated with particular search queries.); acquire, from an operator device, designated categories for grouping time-series search queries, wherein each designated category is designated by an operator (C.12 L.18 At block 305, the process 300 begins in response to receiving an input search query from a user, for example through a search engine of an electronic catalog service. At block 310, the machine learning system performs a semantic parse of the search query to predict the catalog fields that relate to the user's intent, for example a model trained using the process 200. For example, at block 310 the trained model can produce a semantic parse that labels each portion of the query and grounds it to a catalog field. To illustrate, given query of “red dress,” the trained model can produce as output the equivalent of “red is a color and it corresponds to browse node ID=123,” and “dress is an apparel item and it corresponds to category 234.”); categorize, for each input customer, the time-series search queries input in each period into the designated categories based on content and similarities (C.12 L.18 At block 305, the process 300 begins in response to receiving an input search query from a user, for example through a search engine of an electronic catalog service. At block 310, the machine learning system performs a semantic parse of the search query to predict the catalog fields that relate to the user's intent, for example a model trained using the process 200. For example, at block 310 the trained model can produce a semantic parse that labels each portion of the query and grounds it to a catalog field. 
To illustrate, given query of “red dress,” the trained model can produce as output the equivalent of “red is a color and it corresponds to browse node ID=123,” and “dress is an apparel item and it corresponds to category 234.”); specify, for each input customer, a transition mode of the categorized time-series search queries input in each period, wherein each transition mode identifies a sequence of the designated categories (C.18 L.30 When the user transitions from the search results user interface 100 to the detail page 120 for one of the items, any recommended items shown on that page are filtered to match the determined intent of the user. For example, a set of recommended items associated with the item of the detail page can be filtered to only display items that fall within the identified catalog fields. The recommendations may also be presented with an indication that ties back to the initial query, e.g., “humans who viewed this also viewed these other red dresses.” Further, any user-selectable menus on the item detail page relating to the attributes in the query can be pre-selected with the values of the query. For example, if the user selects to view the detail page of a dress that is available in black and red, the option of “red” can be automatically selected based on the user's query. C.12 L.61 At block 335, the catalog service adjust (e.g., filter or re-rank) recommendations for display on the detail page based on the identified catalog fields. As described above, the recommendations can be obtained based on view-based similarities or purchase-based similarities. U.S. Pat. No. 6,912,505, incorporated by reference above, represents one example of how these recommendations may be generated. A behavior-based item-to-item mapping table may be accessed to look up the items most closely associated (based, e.g., on item views or item purchases) with the user-selected item.
These items can then be filtered such that only items that match the catalog fields of the user intent are displayed, or re-ranked such that items that match the catalog fields are surfaced above items that do not (or above items that match only some of the catalog fields)); categorize, for each input customer, the transition mode into one of transition categories, the transition categories being higher-level categories of the designated categories and representing behavioral patterns of users transitioning between designated categories over the mutually different time periods, wherein each transition category identifies a path to the reference query, and wherein the path indicates a transition from one of the designated categories to another one of the designated categories (C.16 L.15 In the context of the electronic catalog, item data can include names, images, brands, prices, descriptions, user reviews (textual or numerical ratings), category/subcategory within a hierarchy of browsable categories of the electronic catalog, high-level category within a general ledger of the electronic catalog, particular services or subscriptions for which the item qualifies, and any metadata associated with specific items of the catalog. The item data repository 432 also stores data representing item attributes. Item attributes represent characteristics of these items, for example category (or another node in the item hierarchy), brand, gender (e.g., whether the item is typically considered as suitable by males, females, or gender-neutral), target age-range (e.g., baby, child, adult), price,… C.16 L.33 The items can be associated with one or more nodes in a hierarchical catalog structure, with a general “items” (e.g., all products) root-level browse node category and more detailed child and descendant browse node categories. Some or all of the items may be associated with one or more categories. 
In particular, an item can be associated with a leaf-node category, which may be most specific to that item. In turn, this leaf-node category may be part of a broader category (e.g., is connected by an edge to a broader category node), which is in turn part of a broader category still, and so on, up and until the root node); for each transition category, train, using (i) embedding vectors corresponding to the time-series search queries input by the plurality of input customers and (ii) a one-hot vector representing the transition mode of the plurality of input customers, a machine learning model to learn, for each transition category, temporal characteristics of behavioral transition patterns of change in the designated categories such that the machine learning model after training (i) receives search queries input by a target user; and (ii) generates a vector representing the transition mode of the target user (C.8 L.12 Expectation maximization can be useful for discovering associations between synonyms or acronyms of words in a catalog field and the corresponding catalog field, and can be performed as a batch process to discover these associations. The distributed representations 265 can learn vector embeddings of words in search queries, for example from a corpus of terms of the electronic catalog (e.g., terms in item descriptions, reviews, question and answer sections, and the like). This can be used to represent each word in the search query as a vector for input into the Viterbi decoding algorithm 250, where such vectors capture similarity between words using proximity in the vector feature space.
C.9 L.26 Training this model includes using the autolabeled queries as training data, where the query (or a vector representation of the query) is provided to the machine learning model as input and the associated catalog fields (or a vector representation of these catalog fields) are provided to the machine learning model as expected output.); and for receiving the search queries input by the target user at a time later than the time-series search queries input by the plurality of input customers, predict future search behavior based on historical transition sequences by comparing the vector generated from the search queries from the target user against vectors generated from the transition categories (C.3 L.25 the disclosed system preferably mines large quantities of collected behavioral data (including search submissions and associated item selection actions of users) for purposes of building a model that can accurately predict or infer the intent associated with a search query. This concept of “intent” grounds the terms of the search query within the context of the electronic catalog to give them meaning with respect to the particular types of items that would satisfy the user's mission C.9 L.15 the machine learning system trains one or more machine learning models to predict catalog fields from input queries. This can be considered as a form of translation, as words, phrases, and semantic content of the search query are translated into one or more catalog fields, with the catalog fields representing specific types of items in the catalog. In one example, a first machine learning model (e.g., a sequence-to-sequence model or transformer network) learns to perform a semantic parse to identify different portions of the query, to label these portions, and to ground the labeled portions to catalog fields. 
Training this model includes using the autolabeled queries as training data, where the query (or a vector representation of the query) is provided to the machine learning model as input and the associated catalog fields (or a vector representation of these catalog fields) are provided to the machine learning model as expected output. Some implementations can apply a decay to training data to give greater weight to the most recent data. The system tunes the internal parameters of the machine learning model to predict the expected output given the input. Once trained, the output of this model can be used to identify the catalog fields, or in other embodiments can be provided to a second machine learning model as candidate catalog fields.). Although not explicitly taught by Tavernier, Khan teaches in the analogous art of system for location-based product solutions using artificial intelligence: and representing behavioral patterns of users transitioning between designated categories over the mutually different time periods (¶0028 The Artificial Intelligence (AI) engine can use the stored data to create metadata for each user based on their behavior, and can assign the user to different categories based on that behavior or other factors. Exemplary metadata which could be attached to the user profile can identify aspects of the user behavior (e.g., always gets ice cream before visiting a certain store). Exemplary categories which could be assigned to a user can include demographic information (age, gender, etc.), as well as the type of stores frequented (posh v. thrifty, food v. retail, etc.). ¶0054 the system can determine that those actions (the drop and subsequent addition), were the same user. 
Likewise, as users enter and leave various geofences 510, 514, the system can determine that the actions were performed by the same user (for example, because of entering and leaving various zones within various time periods).); for each transition category, train, using (i) embedding vectors corresponding to the time-series search queries input by the plurality of input customers and (ii) a one-hot vector representing the transition mode of the plurality of input customers, a machine learning model to learn, for each transition category, temporal characteristics of behavioral transition patterns of the characteristic of change in the designated categories (¶0032-0034 These multi-dimensional vectors can be referred to as "embeddings." To create an embedding for an individual user, the system can take the embeddings of all the user trips (dimension N), the user country (dimension M), and the user's metadata (dimension P). The system can append them to one another, to create the user's Embedding (dimension N + M + P). The system can then input the user's embedding into a fully connected layer, which in turn creates a new embedding (dimension Q). Using the triplet loss function, these embeddings can be pushed further apart or moved closer together. In some configurations, the embeddings of multiple trips can be averaged together, resulting in a singular embedding which represents either a single user's general patterns, or (if the trips are from multiple users) can represent the general patterns of multiple users. Within a given embedding, whether representing an individual or multiple users, vectors that are closer to one another can indicate higher similarity ¶0034-0035 if a store is visited by X number of people, the system can look at the people similar to those X people within the Q dimension vector space who have not visited the store, and an advertiser should target those people because they have similar attributes to those who have visited the store. 
The AI engine can also provide, as an output, a likely path that a given user is likely to follow based on their previous locations.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system for location-based product solutions using artificial intelligence of Khan with the artificial intelligence system for generating intent-aware recommendations of Tavernier for the following reasons: (1) a finding that there was some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine reference teachings, e.g. Tavernier ¶0001 teaches that it is desirable to use interactive systems to implement services for generating customized content suggestions for items stored or represented in a data repository; (2) a finding that there was reasonable expectation of success since the only difference between the claimed invention and the prior art was the lack of actual combination of the elements in a single prior art reference, e.g. Tavernier Abstract teaches using machine learning models to determine user intent from a search query, for example via a semantic parse that identifies particular catalog fields for items in an electronic catalog that would satisfy the user's current mission as reflected in their search query intent, and Khan Abstract teaches using first and third party data as inputs to an Artificial Intelligence (AI) engine which provides suggestions regarding product placement and marketing; and (3) whatever additional findings based on the Graham factual inquiries may be necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness, e.g. Tavernier at least the above cited paragraphs, and Khan at least the inclusively cited paragraphs. 
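The training limitation mapped above recites a model input built from (i) embedding vectors of the time-series search queries and (ii) a one-hot vector of the transition mode. A purely illustrative sketch of those data shapes follows; the vocabulary, embedding values, and category names are hypothetical and are not drawn from Tavernier, Khan, or the application itself:

```python
# Toy embedding table and transition categories (hypothetical values).
EMBEDDINGS = {
    "running":  [0.9, 0.1, 0.0, 0.2],
    "shoes":    [0.8, 0.3, 0.1, 0.0],
    "stroller": [0.1, 0.9, 0.4, 0.3],
}
TRANSITION_CATEGORIES = ["sports_to_baby", "baby_to_sports", "stable"]

def encode_query(words):
    """Average the embeddings of a query's known words (zeros if none)."""
    vecs = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
    if not vecs:
        return [0.0, 0.0, 0.0, 0.0]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def one_hot(category):
    """One-hot vector over the transition categories."""
    return [1.0 if c == category else 0.0 for c in TRANSITION_CATEGORIES]

def training_example(time_series_queries, transition_category):
    """Concatenate per-query embedding vectors with the one-hot mode vector."""
    features = []
    for query in time_series_queries:
        features.extend(encode_query(query))
    features.extend(one_hot(transition_category))
    return features

x = training_example([["running", "shoes"], ["stroller"]], "sports_to_baby")
# Two 4-dim query vectors plus a 3-dim one-hot vector: 11 values total.
```

In a real pipeline the embedding table would be learned from a corpus, as the cited Tavernier passages describe, rather than fixed by hand.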
Therefore, it would be obvious to one skilled in the art at the time of the invention to combine the system for location-based product solutions using artificial intelligence of Khan with the artificial intelligence system for generating intent-aware recommendations of Tavernier. The rationale to support a conclusion that the claim would have been obvious is that "a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and whether there would have been a reasonable expectation of success in doing so." DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006). See MPEP 2143(G). Tavernier teaches: Claim 6. The learning device according to claim 1, wherein the processor is further configured to perform learning of the machine learning model so that a similar vector is output if the search queries corresponding to a similar change in the designated categories are input and that a dissimilar vector is output if the search queries corresponding to a dissimilar change in the designated categories are input (C.8 L.12 Expectation maximization can be useful for discovering associations between synonyms or acronyms of words in a catalog field and the corresponding catalog field, and can be performed as a patch process to discover these associations. The distributed representations 265 can learn vector embeddings of words in search queries, for example from a corpus of terms of the electronic catalog (e.g., terms in item descriptions, reviews, question and answer sections, and the like). This can be used to represent each word in the search query as a vector for input into the Viterbi decoding algorithm 250, where such vectors capture similarity between words using proximity in the vector feature space. 
C.9 L.26 Training this model includes using the autolabeled queries as training data, where the query (or a vector representation of the query) is provided to the machine learning model as input and the associated catalog fields (or a vector representation of these catalog fields) are provided to the machine learning model as expected output. Some implementations can apply a decay to training data to give greater weight to the most recent data. The system tunes the internal parameters of the machine learning model to predict the expected output given the input.). As per claims 9 and 10, the method and non-transitory computer readable storage medium track the device of claims 1 and 1, respectively, resulting in substantially similar limitations. The same cited prior art and rationale of claims 1 and 1 are applied to claims 9 and 10, respectively. Tavernier discloses that the embodiment may be found as a method and non-transitory computer readable storage medium (Fig. 1 and C.7 L.8 a set of executable program instructions stored on one or more non-transitory computer-readable media (e.g., hard drive, flash memory, removable media, etc.) may be loaded into memory (e.g., random access memory or “RAM”) of a server or other computing device). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KURTIS GILLS whose telephone number is (571)270-3315. The examiner can normally be reached on M-F 8-5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor can be reached on 5712726787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/KURTIS GILLS/Primary Examiner, Art Unit 3624
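The final limitation of claim 1 as mapped in this action (comparing the vector generated from the target user's queries against vectors generated from the transition categories) is in essence a nearest-neighbor comparison in vector space. A minimal sketch, assuming cosine similarity as the comparison metric (neither the claim nor the cited passages fix a particular metric) and using hypothetical category vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity: higher means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def predict_transition_category(user_vector, category_vectors):
    """Pick the transition category whose representative vector is most
    similar to the vector generated from the target user's queries."""
    return max(category_vectors,
               key=lambda c: cosine(user_vector, category_vectors[c]))

# Hypothetical category vectors and a hypothetical target-user vector.
category_vectors = {
    "sports_to_baby": [0.9, 0.1],
    "baby_to_sports": [0.1, 0.9],
}
prediction = predict_transition_category([0.8, 0.2], category_vectors)
# [0.8, 0.2] points in roughly the same direction as "sports_to_baby".
```

Claim 6's requirement that similar category changes yield similar vectors (and dissimilar changes dissimilar vectors) is what makes this kind of proximity comparison meaningful at prediction time.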

Prosecution Timeline

Mar 10, 2021
Application Filed
Aug 13, 2022
Non-Final Rejection — §101, §103
Nov 23, 2022
Applicant Interview (Telephonic)
Nov 23, 2022
Examiner Interview Summary
Dec 16, 2022
Response Filed
Mar 23, 2023
Final Rejection — §101, §103
Aug 28, 2023
Request for Continued Examination
Aug 29, 2023
Response after Non-Final Action
Sep 19, 2023
Non-Final Rejection — §101, §103
Nov 14, 2023
Applicant Interview (Telephonic)
Dec 13, 2023
Examiner Interview Summary
Jan 12, 2024
Examiner Interview Summary
Jan 12, 2024
Applicant Interview (Telephonic)
Jul 18, 2024
Applicant Interview (Telephonic)
Nov 07, 2024
Response Filed
Jan 24, 2025
Final Rejection — §101, §103
Jun 27, 2025
Request for Continued Examination
Jul 03, 2025
Response after Non-Final Action
Aug 08, 2025
Non-Final Rejection — §101, §103
Oct 22, 2025
Examiner Interview Summary
Oct 22, 2025
Applicant Interview (Telephonic)
Nov 18, 2025
Response Filed
Feb 05, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602664
INTELLIGENT MEETING TIMESLOT ANALYSIS AND RECOMMENDATION
2y 5m to grant Granted Apr 14, 2026
Patent 12572864
AVOIDING PROHIBITED SEQUENCES OF MATERIALS PROCESSING AT A CRUSHER USING PREDICTIVE ANALYTICS
2y 5m to grant Granted Mar 10, 2026
Patent 12572872
Mine Management System
2y 5m to grant Granted Mar 10, 2026
Patent 12567013
METHOD AND SYSTEM FOR SOLVING SUBSET SUM MATCHING PROBLEM USING DYNAMIC PROGRAMMING APPROACH
2y 5m to grant Granted Mar 03, 2026
Patent 12561703
SYSTEM AND METHOD FOR PERSONA GENERATION
2y 5m to grant Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.

Prosecution Projections

7-8
Expected OA Rounds
57%
Grant Probability
87%
With Interview (+29.4%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 536 resolved cases by this examiner. Grant probability derived from career allow rate.
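The displayed figures are mutually consistent under a simple additive reading of the interview lift; this is an assumption, since the page states only that grant probability is derived from the career allow rate (307 granted of 536 resolved):

```python
granted, resolved = 307, 536      # career figures shown for this examiner
allow_rate = granted / resolved   # about 0.573, displayed as 57%
interview_lift = 0.294            # displayed as +29.4%
with_interview = allow_rate + interview_lift  # about 0.867, displayed as 87%
```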
