Prosecution Insights
Last updated: April 19, 2026
Application No. 17/607,702

ELECTRONIC DEVICE AND METHOD FOR CONTROLLING ELECTRONIC DEVICE

Non-Final OA (§101, §103)
Filed: Oct 29, 2021
Examiner: VANWORMER, SKYLAR K
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 39% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 4m
Grant Probability With Interview: 62%

Examiner Intelligence

Career Allow Rate: 39% (grants only 39% of cases; 11 granted / 28 resolved; -15.7% vs TC avg)
Interview Lift: +22.5% (strong lift among resolved cases with interview)
Avg Prosecution: 4y 4m (typical timeline; 29 currently pending)
Total Applications: 57 (career history, across all art units)

Statute-Specific Performance

§101: 27.7% (-12.3% vs TC avg)
§103: 61.4% (+21.4% vs TC avg)
§102: 2.8% (-37.2% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 28 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/28/2025 has been entered.

Response to Arguments

Applicant's arguments filed 11/28/2025 have been fully considered but they are not persuasive. In regard to the 101 rejections (see Applicant's arguments, pgs. 7-8), Applicant argues that there are improvements to the area of computer technology. Examiner would like to point out that, even with the amendments, the claims still recite mental processes that can be performed with pen and paper, as further detailed in the rejection below. There are also other aspects, but most of them are mere instructions to apply the exception using generic computer components or generally linking the use of the exception to a particular technological environment (see MPEP 2106.05(f) and MPEP 2106.05(h) for more details).
Specifically, for new claim 16, Step 2A, prong 2 and Step 2B:

wherein the first neural network model is trained to output the plurality of keywords and the importance values so that an answer to a user's question is included in a plurality of documents and included in a document with a higher search order among the plurality of documents, and (e.g., mere instructions to apply the judicial exception using generic computer components; see MPEP 2106.05(f))

wherein the second neural network model is trained to output the at least one search word so that the answer to the user's question is included in the plurality of documents and included in the document with the higher search order among the plurality of documents. (e.g., mere instructions to apply the judicial exception using generic computer components; see MPEP 2106.05(f))

Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.

Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Therefore, the 35 U.S.C. 101 rejection is maintained.

Applicant's arguments with respect to claims 1-16 have been considered. However, previous prior art Wu has been mapped in combination with previous reference Srinivasan to teach the newly added claim 16.
Specifically:

wherein the first neural network model is trained to output the plurality of keywords and the importance values so that an answer to a user's question is included in a plurality of documents and included in a document with a higher search order among the plurality of documents, and

(Wu, paragraph 0030, "In response to the search query, search engine 120 extracts one or more keywords (also referred to as search terms) from the search query [a plurality of questions and answers to the plurality of questions]. Search engine 120 performs a search in content database 133, which may include primary content database [a higher search order] 130 and/or auxiliary content database 131, to identify a list of content items that are related to the keywords." And paragraph 0025, "According to one embodiment, when a search query is received from a user device of a user, a first set of features are determined based on the search query, the user device, the user, as well as other related information (e.g., history log, etc.). A bloom filter of a neural network model [the first neural network model is configured to output the plurality of keywords] is applied to the first set of features to generate a second set of features. The second set of features are then fed to a neural network model of a particular category to derive an output value representing a likelihood (e.g., probability) [the importance values] that the user is associated with that particular category. A search is then conducted in a content database based on the search query and the user category of the user, such that better content can be served to target the user.")

wherein the second neural network model is trained to output the at least one search word so that the answer to the user's question is included in the plurality of documents and included in the document with the higher search order among the plurality of documents.
(Wu, paragraph 0046, "At block 603, the second set of features are provided to a neural network model being trained. The second set of features may be fed to the visible layer of nodes of the neural network model, where the neural network model may include one or more hidden layers of nodes. An output is generated. At block 604, processing logic determines whether the output satisfies a predetermined condition or a target value (e.g., probability) that was set for the neural network model. If it is determined the output does not satisfy the predetermined condition or target, at block 605, certain parameters of the bloom filter and/or the neural network model may be adjusted, and the above operations may be iteratively performed to fine tune the bloom filter and/or the neural network model.")

Wu teaches neural network models outputting features, which are being interpreted as the keywords extracted from the documents. The primary content database is interpreted as the higher search order among the documents.

Applicant argues that Wu, Li and Srinivasan fail to teach the newly added limitation of claim 1, including previous limitations. Specifically:

obtain text information corresponding to a user's question;

(Wu, paragraph 0033, "The search engine looks for the words or phrases exactly as entered [obtain text information]. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching where the research involves using statistical analysis on pages containing the words or phrases you search for [to a user's question]. As well, natural language queries allow the user to type a question in the same form one would ask it to a human.")

Above, Wu teaches looking for words or phrases entered by users, being interpreted as the text information.
obtain a plurality of keywords related to the user's question and importance values for respective keywords by inputting the text information corresponding to the user's question into a trained first neural network model;

(Wu, paragraph 0030, "In response to the search query, search engine 120 extracts one or more keywords (also referred to as search terms) [plurality of keywords] from the search query [user's question]. Search engine 120 performs a search in content database 133, which may include primary content database 130 and/or auxiliary content database 131, to identify a list of content items that are related to the keywords." And paragraph 0038, "In response to a search query received from a client device of a user such as client device 101, the search query is fed into each of the models [trained first neural network model] 115. Each of models 115 provides an indicator indicating a likelihood the user is associated with a predetermined category [importance values] corresponding to that particular model. In other words, each of models 115 predicts based on the search query whether the user is likely interested in a particular category of information associated with that particular model.")

Above, Wu teaches extracting keywords and the likelihood the user is associated with a predetermined category. Examiner would like to point out that the indicator of likelihood is interpreted as the importance value because it shows whether the search query will be in a specific category based on an indication like the importance value.
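For background, the kind of first-stage pipeline discussed above (a model that maps question text to keywords with importance values) can be illustrated with a small sketch. This is purely illustrative and not code from the application or from Wu; the stop-word list and the term-frequency scoring are assumptions standing in for the claimed trained model:

```python
from collections import Counter

# Hypothetical stand-in for the claimed "trained first neural network
# model": maps question text to keywords with importance values.
STOPWORDS = {"what", "is", "the", "a", "an", "of", "in", "to", "how", "and"}

def extract_keywords_with_importance(question: str) -> dict[str, float]:
    """Return candidate keywords and a normalized importance value for each.

    A real system would use a trained model; simple term frequency over
    the question text is used here as a stand-in importance signal.
    """
    tokens = [t.strip("?.,!").lower() for t in question.split()]
    counts = Counter(t for t in tokens if t and t not in STOPWORDS)
    total = sum(counts.values()) or 1
    return {word: n / total for word, n in counts.items()}

keywords = extract_keywords_with_importance(
    "What is the capital of France and how large is the capital?"
)
# "capital" appears twice, so it receives the highest importance value.
```

The point of the sketch is only the interface the claims recite: question text in, keyword-to-importance mapping out.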
identify at least one search word to be input into a search engine among the plurality of keywords by inputting the plurality of keywords and the importance values to a trained second neural network model; and

(Wu, paragraph 0030, "In response to the search query, search engine 120 extracts one or more keywords (also referred to as search terms) from the search query [one search word to be input to a search engine among the plurality of keywords]. Search engine 120 performs a search in content database 133, which may include primary content database 130 and/or auxiliary content database 131, to identify a list of content items that are related to the keywords. Primary content database 130 (also referred to as a master content database) may be a general content database, while auxiliary content database 131 (also referred to as a secondary content database) may be a special content database. Search engine 120 returns a search result page having at least some of the content items in the list to client device 101 to be presented therein." And paragraph 0040, "a neural network [a trained second neural network model] model can be used to rank content items of a search result based on the features associated with content items of the search results and user information (e.g., user profile, user device information) of the user. The content items are then sorted based on the rankings [the importance values] that the user is more likely interested in receiving. Furthermore, a neural network model can be used to determine whether a user interaction of a user with a particular content item has occurred (e.g., whether the user has clicked on that particular content item presented to the user) based on the features associated with the user and the content item.")

Above, Wu teaches keywords being extracted from search queries and then the keywords being ranked by a neural network.
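The second stage discussed above (deciding which keywords actually go to the search engine) can likewise be sketched. Again, this is a hypothetical stand-in for the claimed trained second model, using a plain importance ranking rather than a neural network:

```python
def select_search_words(importance: dict[str, float], max_words: int = 2) -> list[str]:
    """Hypothetical stand-in for the claimed "trained second neural
    network model": keep the highest-importance keywords as the
    search-engine input."""
    ranked = sorted(importance.items(), key=lambda kv: kv[1], reverse=True)
    return [word for word, _ in ranked[:max_words]]

search_words = select_search_words({"capital": 0.5, "france": 0.25, "large": 0.25})
# → ["capital", "france"]  (ties keep insertion order; Python's sorted() is stable)
```

A learned model would replace the `sorted` call, but the claimed input/output contract (keywords plus importance values in, a shortlist of search words out) is the same.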
provide an answer to the user's question based on the at least one search word; and

(Wu, paragraph 0044, "Output 502 for each of the content items may be used to rank the content item. A search result may include a list of content item ranked and sorted based on the classification. Note that NN classification engine 510 and NN training engine 410 may be the same engine to train a NN model and classify features using the NN model.")

Above, Wu teaches outputting content based on the search of the user.

As for the newly amended limitation of claim 1, Srinivasan teaches the limitation as seen below. Srinivasan teaches the database comprising documents, which are being interpreted as the questions and answers, as well as the frequency of keywords across these documents. Specifically:

wherein the importance values are obtained based on a frequency at which each keyword of the plurality of keywords are used in a database comprising a plurality of questions and answers to the plurality of questions.

(Srinivasan, paragraph 0017, "content repository can be a storage device or database configured to contain or host a plurality of documents 155. The content expansion system 100 can obtain input content 105 from a user (e.g., via a user interface) or, alternatively, be obtained from another external system or engine via an interface [database]." And paragraph 0028, "After the one or more keywords are extracted from the input content 202, the query constructor then ranks those extracted keywords based on a "degree of importance" that is associated with each of the keywords to produce a list of ranked keywords. In various embodiments, a user-defined or system-defined set of top keywords may be used by the query constructor 204 to construct an input query 206 for retrieving relevant content from a repository, such as content repository 150 of FIG. 1. Identifying relevant content from the content repository can be based on a list of ranked keywords.
The degree of importance associated with each keyword can be based on an importance score or criteria, for example an inverse document frequency (IDF) score for each of the keywords in the content repository. The importance score can take the form of a numerical statistic which reflects the importance of a word to a document or a set of documents." And paragraph 0061, "At block 620, the identified keywords can be scored based on an inverse document frequency score determined for each keyword in the repository. In embodiments, keywords can be identified based on a determined importance of a term, for instance by a query constructor 110 of FIG. 1. In various embodiments, keyword importance can be determined by term frequency in one or more documents in a repository, or by inverse document frequency across a set of documents in the repository. In various embodiments, keyword importance can be determined utilizing machine learning processes, static importance lists, presence in the input content, recurrence in the input content, recurrence in a repository, a determined inverse document frequency score, or any combination thereof.")

As for the amendment made to claim 5, newly cited prior art Du et al. (Text Classification Research with Attention-based Recurrent Neural Networks, "Du") in combination with Wu and Srinivasan is used in the teaching of the amendment. Specifically:

configured to determine an attention weight for input and output a position corresponding to an input column by applying a recurrent neural network (RNN) using an attention mechanism.

(Du, pg. 5, paragraph 3, "We first introduce the recurrent neural network model, and then describe it in detail on the basis of this model to increase the structure of the attention mechanism; the second part is the classifier, the classifier has a dropout [14] layer and softmax layer composition.
The biggest advantage of this model is that only simple preprocessing of text is required, you can use the attention mechanism to select keywords and learn the text of the feature representation.")

Du teaches a recurrent neural network using an attention mechanism to determine the keywords or text, being interpreted as the weight of the input and output.

Therefore, the 35 U.S.C. 103 rejection is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, Step 1: Is the claim to a process, machine, manufacture or composition of matter? Claim 1 is directed to a machine. Step 1: yes.

Step 2A, prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?

obtain a plurality of keywords related to the user's question and importance values for the respective keywords (limitation is directed to a mental process; one can mentally prepare a plurality of keywords by use of pen and paper with respect to the user's questions and importance values of the words.)

identify at least one search word to be input to a search engine among the plurality of keywords (limitation is directed to a mental process; one can mentally identify a search word for input by use of pen and paper with respect to the keywords.)

wherein the importance values are obtained based on a frequency at which each keyword of the plurality of keywords are used in (limitation is directed to a mental process; one can mentally obtain the importance values by use of pen and paper with respect to the frequency of the keyword.)
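For background on the inverse document frequency (IDF) score that the Srinivasan passages above rely on, here is a minimal sketch of one common smoothed IDF variant (the exact formula used in the reference is not reproduced here; this variant is an assumption for illustration):

```python
import math

def inverse_document_frequency(keyword: str, documents: list[str]) -> float:
    """Smoothed IDF: terms that appear in fewer documents score higher."""
    doc_count = sum(1 for doc in documents if keyword in doc.lower().split())
    return math.log((1 + len(documents)) / (1 + doc_count)) + 1.0

docs = ["the cat sat", "the dog ran", "a rare aardvark appeared"]
# "the" appears in two of three documents, "aardvark" in only one,
# so "aardvark" receives the higher importance score.
rare = inverse_document_frequency("aardvark", docs)   # log(4/2) + 1
common = inverse_document_frequency("the", docs)      # log(4/3) + 1
```

The add-one smoothing keeps the score finite even for a keyword absent from every document, which is why this variant is widely used in practice.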
Step 2A, prong 1: If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?

memory storing at least one instruction; and (e.g., mere instruction to apply the judicial exception using generic computer components; MPEP 2106.05(f))

a processor configured to execute the at least one instruction, wherein the processor, by executing the at least one instruction (e.g., mere instruction to apply the judicial exception using generic computer components; MPEP 2106.05(f))

obtain text information corresponding to a user's question; (e.g., insignificant extra-solution activity of mere data gathering or data output; see MPEP 2106.05(g))

by inputting the text information corresponding to the user's question to a trained first neural network model; (e.g., insignificant extra-solution activity of mere data gathering or data output; see MPEP 2106.05(g). Reciting a neural network model as a destination of the input data, without explanation of how the model obtains the keywords/values, is a mere instruction to apply the exception (MPEP 2106.05(f)) which does not meaningfully limit the process of obtaining keywords and importance values.)
by inputting the plurality of keywords and the importance values to a trained second neural network model; (e.g., insignificant extra-solution activity of mere data gathering or data output; see MPEP 2106.05(g). Reciting a trained neural network model as the destination of the input data, without explanation of how the model identifies the search words, is a mere instruction to apply the exception (MPEP 2106.05(f)) which does not meaningfully limit the process of identifying a search word.)

provide an answer to the user's question based on the at least one identified search word. (e.g., insignificant extra-solution activity of mere data gathering or data output; see MPEP 2106.05(g))

a database comprising a plurality of questions and answers to the plurality of questions. (e.g., mere instruction to apply the judicial exception using generic computer components; MPEP 2106.05(f))

Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

memory storing at least one instruction; and (e.g., mere instruction to apply the judicial exception using generic computer components; MPEP 2106.05(f))

a processor configured to execute the at least one instruction, wherein the processor, by executing the at least one instruction (e.g., mere instruction to apply the judicial exception using generic computer components; MPEP 2106.05(f))
obtain text information corresponding to a user's question; (receiving or transmitting data in a machine learning network, using components and functions claimed at a high level of generality, has been determined by the courts as being well-understood, routine and conventional activity in the field of computer functions, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362; see MPEP 2106.05(d)(II)(i))

by inputting the text information corresponding to the user's question to a trained first neural network model; (receiving or transmitting data in a machine learning network, using components and functions claimed at a high level of generality, has been determined by the courts as being well-understood, routine and conventional activity in the field of computer functions, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362; see MPEP 2106.05(d)(II)(i))

by inputting the plurality of keywords and the importance values to a trained second neural network model; (receiving or transmitting data in a machine learning network, using components and functions claimed at a high level of generality, has been determined by the courts as being well-understood, routine and conventional activity in the field of computer functions, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362; see MPEP 2106.05(d)(II)(i). Reciting a trained neural network model as the destination of the input data, without explanation of how the model identifies the search words, is a mere instruction to apply the exception (MPEP 2106.05(f)) or a link to a particular technological environment (MPEP 2106.05(h)) which does not meaningfully limit the process of identifying a search word.)

provide an answer to the user's question based on the at least one identified search word.
(receiving or transmitting data in a machine learning network, using components and functions claimed at a high level of generality, has been determined by the courts as being well-understood, routine and conventional activity in the field of computer functions, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362; see MPEP 2106.05(d)(II)(i))

a database comprising a plurality of questions and answers to the plurality of questions. (e.g., mere instruction to apply the judicial exception using generic computer components; MPEP 2106.05(f))

Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 2 and analogous claim 12: Claim 2 incorporates the analysis of the machine of claim 1. Step 2A, prong 2/Step 2B:

wherein the first neural network model is configured to (e.g., generally linking; MPEP 2106.05(h))

obtain the plurality of keywords and the importance values based on a database, and (limitation is directed to a mental process but for the recitation of a generic computer component, i.e., a database; one can mentally prepare a plurality of keywords by use of pen and paper with respect to the user's questions and answers.)

wherein the plurality of keywords comprise a first word included in the text information corresponding to the user's question and a second word not included in the text information corresponding to the user's question. (limitation is directed to a mental process; one can mentally identify a first and second word, not in the text information, by use of pen and paper with respect to the user's question.)
Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.

Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 3 and analogous claim 13: Claim 3 incorporates the analysis of the machine of claim 2. Step 2A, prong 2/Step 2B:

wherein the second word is a word positioned within a predetermined distance from the first word among the plurality of words included in the database. (e.g., mere instructions to apply the judicial exception using generic computer components; see MPEP 2106.05(f))

Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.

Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 4 and analogous claim 14: Claim 4 incorporates the analysis of the machine of claim 1.

identify a number of keywords to be included in the at least one search word among the plurality of keywords; and (limitation is directed to a mental process; one can mentally identify a number of keywords by use of pen and paper with respect to the search word.)

identify the at least one keyword among the plurality of keywords according to the identified number.
(limitation is directed to a mental process; one can mentally identify at least one keyword by use of pen and paper with respect to the identified number.)

Regarding claim 5: Claim 5 incorporates the analysis of the machine of claim 4. Step 2A, prong 1:

arrange the plurality of keywords according to an order of the importance values; and (limitation is directed to a mental process; one can mentally arrange the plurality of keywords by use of pen and paper with respect to the order of importance.)

identify the number of keywords by identifying a keyword having a lowest importance value among at least one keyword to be included in the at least one search word (limitation is directed to a mental process; one can mentally identify the number of keywords by use of pen and paper with respect to the order of importance.)

configured to determine an attention weight for input and output a position corresponding to an input column by applying a recurrent neural network (RNN) using an attention mechanism. (limitation is directed to a mental process; one can mentally determine the weight by use of pen and paper with respect to input and output positions.)

Step 2A, prong 2/Step 2B:

a pointer network included in the second neural network model. (a mere instruction to apply the exception (MPEP 2106.05(f)) or a link to a particular technological environment (MPEP 2106.05(h)))

Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.

Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 6: Claim 6 incorporates the analysis of the machine of claim 1.
Step 2A, prong 2/Step 2B:

wherein the processor is configured to, based on a voice signal corresponding to the user's question being received through the microphone, obtain text information corresponding to the user's question based on the voice signal. (e.g., mere instructions to apply the judicial exception using generic computer components; see MPEP 2106.05(f))

Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.

Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 7: Claim 7 incorporates the analysis of the machine of claim 1. Step 2A, prong 2 and Step 2B:

a communicator comprising circuitry, wherein the processor is configured to (e.g., mere instructions to apply the judicial exception using generic computer components; see MPEP 2106.05(f))

control the communicator to transmit information on the at least one identified search word (e.g., mere instructions to apply using generic computer components; see MPEP 2106.05(f))

receive a search result for the at least one identified search word from the server via the communicator; and (e.g., insignificant extra-solution activity of mere data gathering or data output; see MPEP 2106.05(g). Using components and functions claimed at a high level of generality has been determined by the courts as being well-understood, routine and conventional activity in the field of computer functions (see MPEP 2106.05(d)(II)(i)).
Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362.)

provide an answer to the user's question based on the received search result. (e.g., insignificant extra-solution activity of mere data gathering or data output; see MPEP 2106.05(g). Using components and functions claimed at a high level of generality has been determined by the courts as being well-understood, routine and conventional activity in the field of computer functions (see MPEP 2106.05(d)(II)(i)). Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362.)

Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.

Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 8: Claim 8 incorporates the analysis of the machine of claim 7. Step 2A, prong 2 and Step 2B:

wherein at least one of the first neural network model and the second neural network model is trained based on the received search result. (e.g., mere instructions to apply using generic computer components; see MPEP 2106.05(f))

Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.

Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea.
Therefore, the claim is not patent eligible.

Regarding claim 9: Claim 9 incorporates the analysis of the machine of claim 8. Step 2A, prong 2 and Step 2B:

wherein the search result comprises a plurality of documents arranged according to a search order, and (e.g., insignificant extra-solution activity of mere data gathering or data output; see MPEP 2106.05(g). Using components and functions claimed at a high level of generality has been determined by the courts as being well-understood, routine and conventional activity in the field of computer functions (see MPEP 2106.05(d)(II)(i)). Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362.)

wherein at least one of the first neural network model and the second neural network model is subjected to reinforcement learning based on whether the answer to the user's question is included in at least one document of the plurality of documents, and the search order of the at least one document. (e.g., mere instructions to apply using generic computer components; see MPEP 2106.05(f))

Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.

Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 10: Claim 10 incorporates the analysis of the machine of claim 9. Step 2A, prong 2 and Step 2B:

wherein an entire pipeline of the first neural network model and the second neural network model is subjected to reinforcement learning by an end-to-end method.
(e.g., mere instructions to apply using generic computer components, see MPEP 2106.05(f)) Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea. Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible. Regarding claim 16, Claim 16 incorporates the analysis of the machine of claim 1. Step 2A, prong 2 and Step 2B: wherein the first neural network model is trained to output the plurality of keywords and the importance values so that an answer to a user's question is included in a plurality of documents and included in a document with a higher search order among the plurality of documents, and (e.g., mere instructions to apply using generic computer components, see MPEP 2106.05(f)) wherein the second neural network model is trained to output the at least one search word so that the answer to the user's question is included in the plurality of documents and included in the document with the higher search order among the plurality of documents. (e.g., mere instructions to apply using generic computer components, see MPEP 2106.05(f)) Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea. Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible. 
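For context on the limitation analyzed above, the reinforcement signal recited in claims 9 and 10 — a reward tied to whether the answer appears in a retrieved document and to that document's search order — can be illustrated with a minimal sketch. The function name and the 1/rank reward shape are hypothetical illustrations, not anything disclosed by the application:

```python
def retrieval_reward(answer: str, ranked_documents: list[str]) -> float:
    """Scalar reward for a question-answering retrieval pipeline.

    One plausible shaping of the signal described in claims 9-10: a hit in
    a higher-ranked document earns a larger reward (1/rank), and a miss in
    every retrieved document earns zero. Hypothetical illustration only.
    """
    for rank, doc in enumerate(ranked_documents, start=1):
        if answer in doc:
            return 1.0 / rank
    return 0.0
```

Under an end-to-end scheme of the kind claim 10 recites, such a scalar could drive joint updates of both the keyword model and the search-word model.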
Regarding claim 11, Step 1: Is the claim to a process, machine, manufacture or composition of matter? Claim 11 is directed to a process. Step 1: yes. The rest of the analysis for claim 11 is analogous to claim 1.

Regarding claim 15, Step 1: Is the claim to a process, machine, manufacture or composition of matter? Claim 15 is directed to a manufacture. Step 1: yes. The rest of the analysis for claim 15 is analogous to claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4 and 6-16 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US Published Patent Application No. 20170323199), in view of Srinivasan et al. (US Published Patent Application No. 20180060287, “Srinivasan”).

In regard to claim 1, Wu teaches a memory storing at least one instruction; and (Wu, paragraph 0055, “In one embodiment, system 1500 includes processor 1501, memory 1503, and devices 1505-1508 via a bus or an interconnect 1510.” And paragraph 0056, “Processor 1501 is configured to execute instructions for performing the operations and steps discussed herein.”)

a processor configured to execute the at least one instruction, wherein the processor, by executing the at least one instruction, is configured to: (Wu, paragraph 0055, “In one embodiment, system 1500 includes processor 1501, memory 1503, and devices 1505-1508 via a bus or an interconnect 1510.” And paragraph 0056, “Processor 1501 is configured to execute instructions for performing the operations and steps discussed herein.”)

obtain text information corresponding to a user's question; (Wu, paragraph 0033, “The search engine looks for the words or phrases exactly as entered [obtain text information]. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching where the research involves using statistical analysis on pages containing the words or phrases you search for [to a user's question].
As well, natural language queries allow the user to type a question in the same form one would ask it to a human.”)

obtain a plurality of keywords related to the user's question and importance values for respective keywords by inputting the text information corresponding to the user's question into a trained first neural network model; (Wu, paragraph 0030, “In response to the search query, search engine 120 extracts one or more keywords (also referred to as search terms) [plurality of keywords] from the search query [user's question]. Search engine 120 performs a search in content database 133, which may include primary content database 130 and/or auxiliary content database 131, to identify a list of content items that are related to the keywords.” And paragraph 0038, “In response to a search query received from a client device of a user such as client device 101, the search query is fed into each of the models [trained first neural network model] 115. Each of models 115 provides an indicator indicating a likelihood the user is associated with a predetermined category [importance values; the examiner notes that the indicator of likelihood is interpreted as the importance value because, like the importance value, it indicates whether the search query falls within a specific category] corresponding to that particular model. In other words, each of models 115 predicts based on the search query whether the user is likely interested in a particular category of information associated with that particular model.”)

identify at least one search word to be input into a search engine among the plurality of keywords by inputting the plurality of keywords and the importance values to a trained second neural network model; and (Wu, paragraph 0030, “In response to the search query, search engine 120 extracts one or more keywords (also referred to as search terms) from the search query [one search word to be input to a search engine among the plurality of keywords].
Search engine 120 performs a search in content database 133, which may include primary content database 130 and/or auxiliary content database 131, to identify a list of content items that are related to the keywords. Primary content database 130 (also referred to as a master content database) may be a general content database, while auxiliary content database 131 (also referred to as a secondary content database) may be a special content database. Search engine 120 returns a search result page having at least some of the content items in the list to client device 101 to be presented therein.” And paragraph 0040, “a neural network [a trained second neural network model] model can be used to rank content items of a search result based on the features associated with content items of the search results and user information (e.g., user profile, user device information) of the user. The content items are then sorted based on the rankings [the importance values] that the user is more likely interested in receiving. Furthermore, a neural network model can be used to determine whether a user interaction of a user with a particular content item has occurred (e.g., whether the user has clicked on that particular content item presented to the user) based on the features associated with the user and the content item.”)

provide an answer to the user's question based on the at least one search word; and (Wu, paragraph 0044, “Output 502 for each of the content items may be used to rank the content item. A search result may include a list of content item ranked and sorted based on the classification.
Note that NN classification engine 510 and NN training engine 410 may be the same engine to train a NN model and classify features using the NN model.”)

However, Wu does not explicitly teach wherein the importance values are obtained based on a frequency at which each keyword of the plurality of keywords are used in a database comprising a plurality of questions and answers to the plurality of questions.

Srinivasan teaches wherein the importance values are obtained based on a frequency at which each keyword of the plurality of keywords are used in a database comprising a plurality of questions and answers to the plurality of questions. (Srinivasan, paragraph 0017, “content repository can be a storage device or database configured to contain or host a plurality of documents 155. The content expansion system 100 can obtain input content 105 from a user (e.g., via a user interface) or, alternatively, be obtained from another external system or engine via an interface [database].” And paragraph 28, “After the one or more keywords are extracted from the input content 202, the query constructor then ranks those extracted keywords based on a "degree of importance" that is associated with each of the keywords to produce a list of ranked keywords. In various embodiments, a user-defined or system-defined set of top keywords may be used by the query constructor 204 to construct an input query 206 for retrieving relevant content from a repository, such as content repository 150 of FIG. 1. Identifying relevant content from the content repository can be based on a list of ranked keywords. The degree of importance associated with each keyword can be based on an importance score or criteria, for example an inverse document frequency (IDF) score for each of the keywords in the content repository.
The importance score can take the form of a numerical statistic which reflects the importance of a word to a document or a set of documents.” and paragraph 61, “At block 620, the identified keywords can be scored based on an inverse document frequency score determined for each keyword in the repository. In embodiments, keywords can be identified based on a determined importance of a term, for instance by a query constructor 110 of FIG. 1. In various embodiments, keyword importance can be determined by term frequency in one or more documents in a repository, or by inverse document frequency across a set of documents in the repository. In various embodiments, keyword importance can be determined utilizing machine learning processes, static importance lists, presence in the input content, recurrence in the input content, recurrence in a repository, a determined inverse document frequency score, or any combination thereof.”)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Wu and Srinivasan before them, to include Srinivasan’s content expansion in Wu’s system of information retrieval. One would have been motivated to make such a combination in order to construct an input query for retrieving relevant content. (Srinivasan, paragraph 0028, “In various embodiments, a user-defined or system-defined set of top keywords may be used by the query constructor 204 to construct an input query 206 for retrieving relevant content from a repository, such as content repository 150 of FIG. 1.”)

In regard to claim 11, the claim recites similar limitations as corresponding claim 1, and is rejected for similar reasons as claim 1 using similar teachings and rationale.

In regard to claim 15, the claim recites similar limitations as corresponding claim 1, and is rejected for similar reasons as claim 1 using similar teachings and rationale.
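The inverse document frequency scoring that the cited Srinivasan passages describe (paragraphs 0028-0029 and 61) can be sketched in a few lines. The function names and the smoothed-IDF formula below are illustrative assumptions, not code disclosed by either reference:

```python
import math


def idf_scores(keywords, documents):
    """Score each keyword by inverse document frequency over a repository.

    A keyword appearing in many documents gets a low score (less distinctive);
    a rare keyword gets a high score, mirroring the "degree of importance"
    ranking described in Srinivasan. The +1 smoothing terms (an assumption)
    avoid division by zero for keywords absent from every document.
    """
    n_docs = len(documents)
    scores = {}
    for kw in keywords:
        # Count documents containing the keyword (case-insensitive).
        df = sum(1 for doc in documents if kw.lower() in doc.lower())
        scores[kw] = math.log((n_docs + 1) / (df + 1)) + 1.0
    return scores


def top_keywords(keywords, documents, k):
    """Rank keywords by IDF and keep the top k to build a search query."""
    scores = idf_scores(keywords, documents)
    return sorted(keywords, key=lambda kw: scores[kw], reverse=True)[:k]
```

In this sketch, a "database comprising a plurality of questions and answers" would simply be the `documents` list, and the top-k selection plays the role of Srinivasan's query constructor choosing a set of top keywords.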
In regard to claim 2 and analogous claim 12, Wu and Srinivasan teach the device of claim 1.

Wu further teaches wherein the trained first neural network model is configured to obtain the plurality of keywords and the importance values based on the database and (Wu, paragraph 0030, “In response to the search query, search engine 120 extracts one or more keywords (also referred to as search terms) from the search query. Search engine 120 performs a search in content database 133, which may include primary content database 130 and/or auxiliary content database 131, to identify a list of content items that are related to the keywords.” And paragraph 0025, “According to one embodiment, when a search query is received from a user device of a user, a first set of features are determined based on the search query, the user device, the user, as well as other related information (e.g., history log, etc.). A bloom filter of a neural network model [the first neural network model is configured to obtain the plurality of keywords] is applied to the first set of features to generate a second set of features. The second set of features are then fed to a neural network model of a particular category to derive an output value representing a likelihood (e.g., probability) [the importance values based on a database] that the user is associated with that particular category. A search is then conducted in a content database based on the search query and the user category of the user, such that better content can be served to target the user.”)

wherein the plurality of keywords comprise a first word included in the text information corresponding to the user's question and a second word not included in the text information corresponding to the user's question. (Wu, paragraph 0024, “A bloom filter is a space-efficient probabilistic data structure that is used to test whether an element is a member of a set.
False positive matches are possible, but false negatives are not, thus a bloom filter has a 100% recall rate. In other words, a query returns either "possibly in set" [a first word included in the text information corresponding to the user's question] or "definitely not in set" [a second word not included in the text information corresponding to the user's question].”)

In regard to claim 3 and analogous claim 13, Wu and Srinivasan teach the device of claim 2.

Wu further teaches wherein the second word includes a word positioned within a predetermined distance from the first word among a plurality of words included in the database. (Wu, paragraph 0033, “The index is built from the information stored with the data and the method by which the information is indexed [included in the database]. The search engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords [a word positioned within a predetermined distance from the first word among the plurality of words]. There is also concept-based searching where the research involves using statistical analysis on pages containing the words or phrases you search for.”)

In regard to claim 4 and analogous claim 14, Wu and Srinivasan teach the device of claim 1. However, Wu does not explicitly teach identify a number of keywords to be included in the at least one search word among the plurality of keywords; and identify the at least one search word among the plurality of keywords according to the identified number.
Srinivasan teaches identify a number of keywords to be included in the at least one search word among the plurality of keywords; and (Srinivasan, paragraph 28, “After the one or more keywords are extracted from the input content 202, the query constructor then ranks those extracted keywords based on a "degree of importance" that is associated with each of the keywords to produce a list of ranked keywords. In various embodiments, a user-defined or system-defined set of top keywords may be used by the query constructor 204 to construct an input query 206 for retrieving relevant content from a repository, such as content repository 150 of FIG. 1. Identifying relevant content from the content repository can be based on a list of ranked keywords. The degree of importance associated with each keyword can be based on an importance score or criteria, for example an inverse document frequency (IDF) score for each of the keywords in the content repository. The importance score can take the form of a numerical statistic which reflects the importance of a word to a document or a set of documents.” and paragraph 61, “At block 620, the identified keywords can be scored based on an inverse document frequency score determined for each keyword in the repository. In embodiments, keywords can be identified based on a determined importance of a term, for instance by a query constructor 110 of FIG. 1. In various embodiments, keyword importance can be determined by term frequency in one or more documents in a repository, or by inverse document frequency across a set of documents in the repository. 
In various embodiments, keyword importance can be determined utilizing machine learning processes, static importance lists, presence in the input content, recurrence in the input content, recurrence in a repository, a determined inverse document frequency score, or any combination thereof.”)

identify the at least one search word among the plurality of keywords according to the identified number. (Srinivasan, paragraph 28, “After the one or more keywords are extracted from the input content 202, the query constructor then ranks those extracted keywords based on a "degree of importance" that is associated with each of the keywords to produce a list of ranked keywords. In various embodiments, a user-defined or system-defined set of top keywords may be used by the query constructor 204 to construct an input query 206 for retrieving relevant content from a repository, such as content repository 150 of FIG. 1. Identifying relevant content from the content repository can be based on a list of ranked keywords. The degree of importance associated with each keyword can be based on an importance score or criteria, for example an inverse document frequency (IDF) score for each of the keywords in the content repository. The importance score can take the form of a numerical statistic which reflects the importance of a word to a document or a set of documents.” and paragraph 61, “At block 620, the identified keywords can be scored based on an inverse document frequency score determined for each keyword in the repository. In embodiments, keywords can be identified based on a determined importance of a term, for instance by a query constructor 110 of FIG. 1. In various embodiments, keyword importance can be determined by term frequency in one or more documents in a repository, or by inverse document frequency across a set of documents in the repository.
In various embodiments, keyword importance can be determined utilizing machine learning processes, static importance lists, presence in the input content, recurrence in the input content, recurrence in a repository, a determined inverse document frequency score, or any combination thereof.”)

Wu and Srinivasan are combinable for the same rationale as set forth above with respect to claim 1.

In regard to claim 6, Wu and Srinivasan teach the device of claim 1.

Wu further teaches a microphone, (Wu, paragraph 0060, “IO devices 1507 may include an audio device. An audio device may include a speaker and/or a microphone [a microphone] to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and/or telephony functions.”)

wherein the processor is configured to: based on a voice signal corresponding to the user's question being received through the microphone, obtain the text information corresponding to the user's question based on the voice signal. (Wu, paragraph 0034, “Referring back to FIG. 3A, according to one embodiment, in response to a search query received at server 104 from a client device, in this example, client device 101, search engine 120 performs a search in content database 133, such as primary content database 130 and/or auxiliary content database 131, to generate a list of content items [obtain text information corresponding to the user's question based on the voice signal].” and paragraph 0060, “IO devices 1507 may include an audio device. An audio device may include a speaker and/or a microphone to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and/or telephony functions.” [based on a voice signal corresponding to the user's question being received through the microphone])

In regard to claim 7, Wu and Srinivasan teach the device of claim 1.
Wu further teaches a communicator comprising circuitry, wherein the processor is configured to: control the communicator to transmit information on the at least one search word to a server for providing the search engine; (Wu, paragraph 0030, “For example, a client, in this example, a user application of client device 101 (e.g., Web browser, mobile application), may send a search query to server 104 and the search query is received by search engine 120 via the interface over network 103.” And paragraph 0050, “Process 800 may be performed by processing logic that includes hardware (e.g. circuitry, dedicated logic, etc.), software (e.g., embodied on a non-transitory computer readable medium), or a combination thereof.”)

receive a search result for the at least one search word from the server via the communicator; and (Wu, paragraph 0030, “In response to the search query, search engine 120 extracts one or more keywords (also referred to as search terms) from the search query. Search engine 120 performs a search in content database 133, which may include primary content database 130 and/or auxiliary content database 131, to identify a list of content items that are related to the keywords.”)

provide an answer to the user's question based on the search result. (Wu, paragraph 0030, “Search engine 120 returns a search result page having at least some of the content items in the list to client device 101 to be presented therein.”)

In regard to claim 8, Wu and Srinivasan teach the device of claim 7.

Wu further teaches wherein at least one of the trained first neural network model and the second neural network model is trained based on the received search result. (Wu, paragraph 0044, “NN classification engine 510 can classify the content items using NN model 415. Output 502 may be a score representing a likelihood that a particular content item is interesting to a particular user. Output 502 for each of the content items may be used to rank the content item.
A search result may include a list of content item ranked and sorted based on the classification. Note that NN classification engine 510 and NN training engine 410 may be the same engine to train a NN model and classify features using the NN model [trained based on the received search result].”)

In regard to claim 9, Wu and Srinivasan teach the device of claim 8.

Wu further teaches wherein the search result comprises a plurality of documents arranged according to a search order, and (Wu, paragraph 0026, “Alternatively, a neural network model can be used to rank content items of a search result [plurality of documents arranged according to a search order] based on the features associated with content items of the search results and user information (e.g., user profile, user device information) of the user. The content items are then sorted based on the rankings that the user is more likely interested in receiving.”)

wherein at least one of the trained first neural network model and the second neural network model is subjected to reinforcement learning based on whether the answer to the user's question is included in at least one document of the plurality of documents, and the search order of the at least one document. (Wu, paragraph 0042, “If it is determined the NN model 415 does not satisfy a predetermined requirement, one or more parameters of bloom filter 411 may be adjusted and NN model 415 is retrained [subjected to reinforcement learning based on whether the answer to the user's question is included in at least one document of the plurality of documents], until the NN model 415 satisfies the predetermined requirement.”)

In regard to claim 10, Wu and Srinivasan teach the device of claim 9.

Wu further teaches wherein an entire pipeline of the trained first neural network model or the trained second neural network model is subjected to reinforcement learning by an end-to-end method.
(Wu, paragraph 0042, “If it is determined the NN model 415 does not satisfy a predetermined requirement, one or more parameters of bloom filter 411 may be adjusted and NN model 415 is retrained [subjected to reinforcement learning by an end-to-end method], until the NN model 415 satisfies the predetermined requirement.”)

In regard to claim 16, Wu and Srinivasan teach the device of claim 1.

Wu further teaches wherein the first neural network model is trained to output the plurality of keywords and the importance values so that an answer to a user's question is included in a plurality of documents and included in a document with a higher search order among the plurality of documents, and (Wu, paragraph 0030, “In response to the search query, search engine 120 extracts one or more keywords (also referred to as search terms) from the search query [a plurality of questions and answers to the plurality of questions]. Search engine 120 performs a search in content database 133, which may include primary content database [a higher search order] 130 and/or auxiliary content database 131, to identify a list of content items that are related to the keywords.” And paragraph 0025, “According to one embodiment, when a search query is received from a user device of a user, a first set of features are determined based on the search query, the user device, the user, as well as other related information (e.g., history log, etc.). A bloom filter of a neural network model [the first neural network model is configured to output the plurality of keywords] is applied to the first set of features to generate a second set of features. The second set of features are then fed to a neural network model of a particular category to derive an output value representing a likelihood (e.g., probability) [the importance values] that the user is associated with that particular category.
A search is then conducted in a content database based on the search query and the user category of the user, such that better content can be served to target the user.”)

wherein the second neural network model is trained to output the at least one search word so that the answer to the user's question is included in the plurality of documents and included in the document with the higher search order among the plurality of documents. (Wu, paragraph 0046, “At block 603, the second set of features are provided to a neural network model being trained. The second set of features may be fed to the visible layer of nodes of the neural network model, where the neural network model may include one or more hidden layers of nodes. An output is generated. At block 604, processing logic determines whether the output satisfies a predetermined condition or a target value (e.g., probability) that was set for the neural network model. If it is determined the output does not satisfy the predetermined condition or target, at block 605, certain parameters of the bloom filter and/or the neural network model may be adjusted, and the above operations may be iteratively performed to fine tune the bloom filter and/or the neural network model.”)

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Wu, in view of Srinivasan, in further view of Du et al (Text Classification Research with Attention-based Recurrent Neural Networks, "Du").

In regard to claim 5, Wu and Srinivasan teach the device of claim 4.

Srinivasan further teaches arrange the plurality of keywords according to an order of the importance values; and (Srinivasan, paragraph 28, “After the one or more keywords are extracted from the input content 202, the query constructor then ranks those extracted keywords based on a "degree of importance" that is associated with each of the keywords to produce a list of ranked keywords.
In various embodiments, a user-defined or system-defined set of top keywords may be used by the query constructor 204 to construct an input query 206 for retrieving relevant content from a repository, such as content repository 150 of FIG. 1. Identifying relevant content from the content repository can be based on a list of ranked keywords. The degree of importance associated with each keyword can be based on an importance score or criteria, for example an inverse document frequency (IDF) score for each of the keywords in the content repository. The importance score can take the form of a numerical statistic which reflects the importance of a word to a document or a set of documents.” and paragraph 61, “At block 620, the identified keywords can be scored based on an inverse document frequency score determined for each keyword in the repository. In embodiments, keywords can be identified based on a determined importance of a term, for instance by a query constructor 110 of FIG. 1. In various embodiments, keyword importance can be determined by term frequency in one or more documents in a repository, or by inverse document frequency across a set of documents in the repository. 
In various embodiments, keyword importance can be determined utilizing machine learning processes, static importance lists, presence in the input content, recurrence in the input content, recurrence in a repository, a determined inverse document frequency score, or any combination thereof.”)

identify the number of keywords by identifying a keyword having a lowest importance value among at least one keyword to be included in the at least one search word through a pointer network included in the trained second neural network model (Srinivasan, paragraph 28, “After the one or more keywords are extracted from the input content 202, the query constructor then ranks those extracted keywords based on a "degree of importance" that is associated with each of the keywords to produce a list of ranked keywords. In various embodiments, a user-defined or system-defined set of top keywords may be used by the query constructor 204 to construct an input query 206 for retrieving relevant content from a repository, such as content repository 150 of FIG. 1. Identifying relevant content from the content repository can be based on a list of ranked keywords. The degree of importance associated with each keyword can be based on an importance score or criteria, for example an inverse document frequency (IDF) score for each of the keywords in the content repository. The importance score can take the form of a numerical statistic which reflects the importance of a word to a document or a set of documents.” And paragraph 0029, “It will be appreciated that with respect to an IDF score, the more often a term appears in an index, the less relevant it becomes, and further terms that appear in many documents will have a lower weight than more uncommon terms [identifying a keyword having a lowest importance value].
Utilizing one or more of the keywords from the list of ranked keywords, an input query 206 can be constructed by the query constructor 204 and utilized to identify and obtain relevant content from the content repository. “ and paragraph 61, “At block 620, the identified keywords can be scored based on an inverse document frequency score determined for each keyword in the repository. In embodiments, keywords can be identified based on a determined importance of a term, for instance by a query constructor 110 of FIG. 1. In various embodiments, keyword importance can be determined by term frequency in one or more documents in a repository, or by inverse document frequency across a set of documents in the repository. In various embodiments, keyword importance can be determined utilizing machine learning processes, static importance lists, presence in the input content, recurrence in the input content, recurrence in a repository, a determined inverse document frequency score, or any combination thereof.”) Wu and Srinivasan are combinable for the same rationale as set forth above with respect to claim 1. However, Wu and Srinivasan do not explicitly teach configured to determine an attention weight for input and output a position corresponding to an input column by applying a recurrent neural network (RNN) using an attention mechanism. Du teaches configured to determine an attention weight for input and output a position corresponding to an input column by applying a recurrent neural network (RNN) using an attention mechanism. (Du, pg. 5, paragraph 3, “We first introduce the recurrent neural network model, and then describe it in detail on the basis of this model to increase the structure of the attention mechanism; the second part is the classifier, the classifier has a dropout [14] layer and softmax layer composition. 
The biggest advantage of this model is that only simple preprocessing of text is required, you can use the attention mechanism to select keywords and learn the text of the feature representation.”) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Wu, Srinivasan and Du before them, to include Du’s text classification in Wu and Srinivasan’s system of information retrieval. One would have been motivated to make such a combination in order to pay more attention to keywords in the classification. (Du, abstract, “Therefore, the representation of texts not only considers all words, but also pays more attention to key words.”) Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to SKYLAR K VANWORMER whose telephone number is (703)756-1571. The examiner can normally be reached M-F 6:00am to 3:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached on (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.K.V./
Examiner, Art Unit 2146

/USMAAN SAEED/
Supervisory Patent Examiner, Art Unit 2146
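The Srinivasan passages the examiner relies on turn on IDF-based keyword ranking: terms that appear in many documents in a repository receive a lower weight, and the claim limitation at issue singles out the keyword with the lowest importance value. The sketch below illustrates that scoring-and-ranking step only; it is a minimal toy example, not code from Srinivasan or the application, and the smoothed-IDF formula, function names, and sample repository are all illustrative assumptions.

```python
import math

def idf_scores(keywords, repository):
    """Score each keyword by inverse document frequency across a repository.

    Terms appearing in many documents get a lower weight than uncommon
    terms, mirroring the IDF behavior described in the quoted passages.
    """
    n_docs = len(repository)
    scores = {}
    for kw in keywords:
        doc_freq = sum(1 for doc in repository if kw in doc)
        # Smoothed IDF: rarer terms score higher, common terms approach 1.
        scores[kw] = math.log((1 + n_docs) / (1 + doc_freq)) + 1
    return scores

def rank_keywords(keywords, repository):
    """Return keywords ordered by descending importance (IDF score)."""
    scores = idf_scores(keywords, repository)
    return sorted(keywords, key=lambda kw: scores[kw], reverse=True)

# Toy repository: each document is modeled as a set of terms.
repo = [
    {"electronic", "device", "display"},
    {"electronic", "device", "battery"},
    {"electronic", "neural", "network"},
]
ranked = rank_keywords(["electronic", "device", "neural"], repo)
lowest = ranked[-1]  # the keyword having the lowest importance value
```

Here "electronic" appears in every document, so it ends up last in the ranking — the role the examiner maps onto "identifying a keyword having a lowest importance value".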
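The Du passage cited against the RNN-with-attention limitation describes computing an attention weight per input position so that high-weight positions act as selected keywords. The sketch below shows only that softmax-attention step, with fixed toy vectors standing in for trained RNN hidden states and a learned context vector; all names and values are illustrative assumptions, not taken from Du or the claims.

```python
import math

def attention_weights(hidden_states, context):
    """Softmax-normalized attention weight for each input position.

    hidden_states: one vector per word (e.g. per-position RNN outputs).
    context: a query vector of the same length. Positions with high
    weight play the role of the 'selected' keywords.
    """
    scores = [sum(h_i * c_i for h_i, c_i in zip(h, context))
              for h in hidden_states]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 4-word sequence with 3-dimensional "hidden states".
hidden = [[0.1, 0.4, -0.2], [0.9, 0.1, 0.3],
          [-0.5, 0.2, 0.0], [0.3, 0.8, 0.1]]
query = [1.0, 0.5, 0.0]
weights = attention_weights(hidden, query)
top_position = weights.index(max(weights))  # position attended to most
```

The output is one weight per input column summing to 1, which is the "attention weight for input and output a position corresponding to an input column" behavior the rejection attributes to Du.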

Prosecution Timeline

Oct 29, 2021
Application Filed
Apr 01, 2025
Non-Final Rejection — §101, §103
Jun 09, 2025
Applicant Interview (Telephonic)
Jun 10, 2025
Examiner Interview Summary
Jun 23, 2025
Response Filed
Sep 29, 2025
Final Rejection — §101, §103
Nov 28, 2025
Response after Non-Final Action
Dec 23, 2025
Request for Continued Examination
Jan 14, 2026
Response after Non-Final Action
Mar 27, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591789
Knowledge distillation in multi-arm bandit, neural network models for real-time online optimization
2y 5m to grant Granted Mar 31, 2026
Patent 12541680
REDUCED COMPUTATION REAL TIME RECURRENT LEARNING
2y 5m to grant Granted Feb 03, 2026
Patent 12524655
ARTIFICIAL NEURAL NETWORK PROCESSING METHODS AND SYSTEM
2y 5m to grant Granted Jan 13, 2026
Patent 12511554
Complex System for End-to-End Causal Inference
2y 5m to grant Granted Dec 30, 2025
Patent 12505358
Methods and Systems for Approximating Embeddings of Out-Of-Knowledge-Graph Entities for Link Prediction in Knowledge Graph
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
39%
Grant Probability
62%
With Interview (+22.5%)
4y 4m
Median Time to Grant
High
PTA Risk
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
