Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Amendments
This action is in response to amendments filed October 3, 2025, in which Claims 1, 7-8, and 14-15 have been amended. No claims have been cancelled or added, and Claims 1-20 remain pending.
Response to Arguments
Regarding the claim objections of the previous Office action, claims 1, 8, and 15 have been amended accordingly to remove the minor informalities. Therefore, the claim objections have been overcome and are hereby withdrawn.
Regarding the 35 U.S.C. 112(b) rejections of the previous Office action, claims 7 and 14 have been amended accordingly to remove the source of confusion therein. Therefore, the rejections under 35 U.S.C. 112(b) have been overcome and are hereby withdrawn.
Regarding the applicant’s traversal of the 35 U.S.C. 101 rejections of the previous Office action, the applicant’s arguments filed October 8, 2025 have been fully considered but are unpersuasive.
Applicant asserts that using a pre-trained language model to respond to a word-based question using only words associated with a selected layer of a hierarchical taxonomy cannot be practically performed in the human mind, and further asserts that the standard for a rejection under 35 U.S.C. 101 is not whether claim elements can be performed in the human mind, but whether they can be practically performed in the human mind.
The examiner respectfully submits that the claim limitation at issue, “responding to the word-based question using only words associated with the selected at least one layer of the at least two layers of the hierarchical taxonomy”, was never found to recite a mental process in the previous Office action. It was found to recite the insignificant extra-solution activity of mere data output, as set forth in MPEP 2106.05(g).
The only limitation in the claim which was found to recite an abstract idea was “selecting at least one layer of the hierarchical taxonomy, wherein the hierarchical taxonomy comprises at least two layers, each of the at least two layers including respective words resulting in the at least two layers having varying levels of complexity”, which can be performed practically in the human mind. One can practically evaluate at least two layers that contain respective words with varying levels of complexity and make a judgment to select at least one of them. This is the only abstract idea identified in claim 1.
Under the procedure for the Section 101 analysis, once an abstract idea is identified, such as the one cited above, the remaining limitations, and the claim as a whole, must be evaluated to determine whether they integrate the abstract idea into a practical application. This is why the aforementioned response limitation was analyzed and found to recite insignificant extra-solution activity, which, as cited above, does not provide evidence of integration into a practical application or of significantly more.
Further, the applicant asserts that, in light of the USPTO’s “Reminders on evaluating subject matter eligibility under 35 U.S.C. 101” dated August 4, 2025, the present application merely “involves” an exception rather than “reciting” it. The examiner respectfully submits that the example given in that memorandum as merely involving a judicial exception, Example 39, recites “training the neural network in a first stage using the first training set”, which involves techniques such as mathematical concepts but does not explicitly recite them. This is what is meant by “involving” rather than “reciting”. Claim 1 of the present application explicitly recites an abstract idea with the limitation “selecting at least one layer of the hierarchical taxonomy, wherein the hierarchical taxonomy comprises at least two layers, each of the at least two layers including respective words resulting in the at least two layers having varying levels of complexity”. Therefore, the remaining limitations must be relied upon to provide evidence of integration into a practical application.
Further, applicant asserts that the claims are directed to “the technical field of accurately extracting knowledge from stored content” and improve “the accuracy of a response from stored content”, citing the table of Figure 5 of the specification as evidence. While, as pointed out by the applicant, this figure does illustrate improved accuracy, the limitation “responding to the word-based question using only words associated with the selected at least one layer of the at least two layers of the hierarchical taxonomy” still recites the extra-solution activity of data output, and the examiner respectfully asserts that this argument, at best, establishes that this limitation generally links the use of the judicial exception to a particular technological environment or field of use, which does not provide evidence of integration into a practical application (MPEP 2106.05(h)).
Further, the applicant asserts that the examiner alleges that, because the applicant’s claim can be argued to contain abstract subject matter, the claim as a whole is abstract. The examiner respectfully disagrees with this characterization. Because a single limitation recites an abstract idea, the two-prong analysis under 35 U.S.C. 101 requires that the remaining limitations, and the claim as a whole, be examined for evidence of integration into a practical application. As described in the previous action and below, every limitation was analyzed on its own merits, with citations to the MPEP explaining why each was not found to provide evidence of integration into a practical application.
Further, applicant argues that the claims recite improvements that reflect an improvement in the functioning of a computer or an improvement to other technology, asserting that “using a pre-trained language model, responding to the word-based question using only words associated with the selected at least one layer of the at least two layers of the hierarchical taxonomy” provides evidence of an improvement to the technology of comprehension-based question answering and content understanding, in light of [0041], [0053], and Figures 1 and 5 of the specification.
The examiner respectfully asserts that the above limitation only teaches the formulation of a response (data output) using specific layers, which is to say, merely isolating specific datasets from which to formulate that response. This is a generic link to the known field of “ontology-based question answering (QA)”, a subfield of comprehension-based question answering that uses structured ontologies/taxonomies to answer questions instead of simply retrieving a list of relevant documents. Further, the limitation describes another known machine-learning technique, “constrained decoding”, which guides large language models to generate text adhering to specific rules, formats, or structures by restricting the candidate next words at each step (an illustrative sketch follows the examples below). As such, this does not provide evidence of integration into a practical application or of significantly more than the judicial exception. Therefore, the rejections under 35 U.S.C. 101 are maintained.
Examples:
Ontology-Based QA Example: https://kmi.open.ac.uk/publications/pdf/kmi-03-7.pdf
Constrained Decoding Example: https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/tutorials/Feature_Guide/Constrained_Decoding/README.html
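To illustrate the constrained-decoding technique referenced above, the following minimal sketch is provided. It is a hypothetical illustration only: the scoring function and the names score_next_tokens and layer_vocabulary are assumed for illustration and do not come from any cited reference or actual library.

    # Minimal, hypothetical sketch of greedy constrained decoding.
    # The "model" is a stand-in scoring function, not a real language
    # model; all names and values here are illustrative only.

    def score_next_tokens(context):
        """Stand-in for a pre-trained language model: returns a score
        for every word in a toy vocabulary given the context so far."""
        return {"dog": 0.9, "canine": 0.7, "mammal": 0.5, "animal": 0.3}

    def constrained_decode(question, layer_vocabulary, max_words=5):
        """Generate a response using only words from the selected
        taxonomy layer: at each step, candidates outside the layer
        are masked out before the highest-scoring word is chosen."""
        response = []
        for _ in range(max_words):
            scores = score_next_tokens(question + " " + " ".join(response))
            allowed = {w: s for w, s in scores.items() if w in layer_vocabulary}
            if not allowed:
                break
            response.append(max(allowed, key=allowed.get))
        return " ".join(response)

    # Example: answer using only words from a low-complexity layer.
    print(constrained_decode("What is a beagle?", {"dog", "animal"}, max_words=1))  # -> "dog"

The point of the sketch is that the restriction operates at decoding time by masking the candidate vocabulary, which is the standard mechanism of constrained decoding.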
Regarding the applicant’s traversal of the 35 U.S.C. 102/103 rejections of the previous Office action, the applicant’s arguments filed October 8, 2025 have been fully considered but are unpersuasive.
The applicant asserts that FANG fails to teach or suggest every element of claim 1. Specifically, the applicant asserts that FANG fails to teach “selecting at least one layer of the hierarchical taxonomy, wherein the hierarchical taxonomy comprises at least two layers, each of the at least two layers including respective words resulting in the at least two layers having varying levels of complexity” and “using a pre-trained language model, responding to the word-based question using only words associated with the selected at least one layer of the at least two layers of the hierarchical taxonomy”. The examiner respectfully submits that these limitations, which have not been amended aside from grammatical corrections, remain taught by FANG under the same rationale set forth in the previous action.
The applicant further asserts that [0034] of the specification differs from the teachings of FANG because, in the specification, the complexity of the words associated with a layer determines the level of complexity, whereas in FANG the number of words in the layer defines the complexity of the layer. The examiner respectfully submits that this distinction is not recited in the claims and is therefore not relevant to them. If it is the applicant’s intent to patent this distinction, then the distinction should be placed in the claims, and further search and consideration will commence on that basis. The claim itself merely states that the layers have varying levels of complexity, not that these levels of complexity are defined by the complexity of a single word within them, nor is the term “levels of complexity” explicitly “defined and restricted” to this meaning anywhere in the specification.
Therefore, the 102 rejection of claim 1 is maintained. Independent claims 8 and 15 recite similar limitations, and their rejections are maintained under the same rationale. Dependent claims 2-7, 9-14, and 16-20 depend from these claims and were additionally rejected on their own merits; those rejections are likewise maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.
Regarding claim 1, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A method for comprehension-based question answering using a hierarchical taxonomy”. A method is one of the four statutory categories of invention.
In Step 2a Prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components:
“selecting at least one layer of the hierarchical taxonomy, wherein the hierarchical taxonomy comprises at least two layers, each of the at least two layers including respective words resulting in the at least two layers having varying levels of complexity” (A person can mentally evaluate the at least two layers and make a judgment to select at least one of them (MPEP 2106).)
If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In Step 2a Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
(ii) “receiving a word-based question” (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)
(iii) “using a pre-trained language model…” (Merely using a pre-trained language model constitutes mere instructions to apply the exception using a generic computer (MPEP 2106.05(f)).)
(iv) “responding to the word-based question using only words associated with the selected at least one layer of the at least two layers of the hierarchical taxonomy” (Adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In Step 2b of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, additional elements (ii) and (iv) recite insignificant extra-solution activities. Further, elements (ii) and (iv) recite steps of receiving/transmitting data via a network, which the courts have determined to be a well-understood, routine, and conventional activity that is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Additional element (iii) recites mere instructions to apply the exception using a generic computer, which is not indicative of significantly more. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 2, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to claim 1. Further, claim 2 recites the following additional mental processes:
“after receiving the word-based question, associating the word-based question with a layer of the hierarchical taxonomy” (This is another mental process because a person can mentally evaluate the word-based question and make a mental judgment to associate it with a specific layer (MPEP 2106).)
“wherein the selecting at least one layer of the hierarchical taxonomy includes determining which layer of the at least two layers of the hierarchical taxonomy comprises a layer of complexity one level less than the layer of the hierarchical taxonomy associated with the word-based question” (This is another mental process because a person can mentally evaluate the layers of the hierarchical taxonomy and make a judgment to determine which of them comprises a layer of complexity one level less than the layer of the hierarchical taxonomy associated with the word-based question (MPEP 2106).)
Further, claim 2 recites “wherein, the word-based question is responded to by the pre-trained language model using only words associated with the layer of the at least two layers of the hierarchical taxonomy having the one less level of complexity” (In Step 2A, Prong 2, this recites adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)). In Step 2B, this recites transmitting/receiving data over a network, which the courts have found to be a well-understood, routine, and conventional activity (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 3, it is dependent upon claim 2, and thereby incorporates the limitations of, and corresponding analysis applied to claim 2. Further, claim 3 recites “wherein the associating is performed by a user via a graphical user input” (In Step 2A, Prong 2, this recites using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). In Step 2B, using a computer as a tool to perform an abstract idea is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 4, it is dependent upon claim 2, and thereby incorporates the limitations of, and corresponding analysis applied to claim 2. Further, claim 4 recites “wherein the associating is performed using a machine learning process” (In Step 2A, Prong 2, this recites merely using a machine learning process, which constitutes mere instructions to apply the exception using a generic computer (MPEP 2106.05(f)). In Step 2B, mere instructions to apply the exception using a generic computer are not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 5, it is dependent upon claim 2, and thereby incorporates the limitations of, and corresponding analysis applied to claim 2. Further, claim 5 recites “wherein the associating is performed using stored information associating questions with respective layers of at least one hierarchical taxonomy.” (In Step 2A, Prong 2, this recites merely using stored information to perform the association, which constitutes mere instructions to apply the exception using a generic computer (MPEP 2106.05(f)). In Step 2B, mere instructions to apply the exception using a generic computer are not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 6, it is dependent upon claim 2, and thereby incorporates the limitations of, and corresponding analysis applied to claim 2. Further, claim 6 recites the following additional mental process:
“wherein the determining is performed using information provided by a user.” (A person can mentally evaluate information and make a judgment to determine which layer of the hierarchical taxonomy comprises a layer of complexity one level less than the layer associated with the question (MPEP 2106).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 7, it is dependent upon claim 2, and thereby incorporates the limitations of, and corresponding analysis applied to claim 2. Further, claim 7 recites the following additional mental process:
“wherein the determining is performed using information provided with the hierarchical taxonomy.” (A person can mentally evaluate information and make a judgment to determine which layer of the hierarchical taxonomy comprises a layer of complexity one level less than the layer associated with the question (MPEP 2106).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 8, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A non-transitory machine-readable medium having stored thereon at least one program”. A non-transitory machine-readable medium is within one of the four statutory categories of invention.
In Step 2a Prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components:
“selecting at least one layer of the hierarchical taxonomy, wherein the hierarchical taxonomy comprises at least two layers, each of the at least two layers including respective words resulting in the at least two layers having varying levels of complexity” (A person can mentally evaluate the at least two layers and make a judgment to select at least one of them (MPEP 2106).)
If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In Step 2a Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
(ii) “A non-transitory machine-readable medium having stored thereon at least one program, the at least one program including instructions which, when executed by a processor, cause the processor to perform a method in a processor based system for comprehension-based question answering using a hierarchical taxonomy” (Uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).)
(iii) “receiving a word-based question” (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)
(iv) “using a pre-trained language model…” (Merely using a pre-trained language model constitutes mere instructions to apply the exception using a generic computer (MPEP 2106.05(f)).)
(v) “responding to the word-based question using only words associated with the selected at least one layer of the at least two layers of the hierarchical taxonomy” (Adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In Step 2b of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, additional element (ii) recites using a computer as a tool to perform an abstract idea, which is not indicative of significantly more. Additional elements (iii) and (v) recite insignificant extra-solution activities. Further, elements (iii) and (v) recite steps of receiving/transmitting data via a network, which the courts have determined to be a well-understood, routine, and conventional activity that is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Additional element (iv) recites mere instructions to apply the exception using a generic computer, which is not indicative of significantly more. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claims 9-14, they are dependent upon claim 8, and thereby incorporate the limitations of, and corresponding analysis applied to claim 8. Further, claims 9-14 comprise similar additional limitations to claims 2-7, respectively, and are rejected under the same rationale.
Regarding claim 15, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A system for comprehension-based question answering using a hierarchical taxonomy, comprising: a storage device; and an apparatus; comprising a processor; and a memory coupled to the processor”. A system with physical components such as a processor is within one of the four statutory categories of invention.
In Step 2a Prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components:
“select at least one layer of the hierarchical taxonomy, wherein the hierarchical taxonomy comprises at least two layers, each of the at least two layers including respective words resulting in the at least two layers having varying levels of complexity” (A person can mentally evaluate the at least two layers and make a judgment to select at least one of them (MPEP 2106).)
If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In Step 2a Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
(ii) “A system for comprehension-based question answering using a hierarchical taxonomy, comprising: a storage device; and an apparatus; comprising a processor; and a memory coupled to the processor, the memory having stored therein at least one of programs or instructions executable by the processor to configure the system to” (Uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).)
(iii) “receive a word-based question” (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)
(iv) “using a pre-trained language model…” (Merely using a pre-trained language model constitutes mere instructions to apply the exception using a generic computer (MPEP 2106.05(f)).)
(v) “respond to the word-based question using only words associated with the selected at least one layer of the at least two layers of the hierarchical taxonomy” (Adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In Step 2b of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, additional element (ii) recites using a computer as a tool to perform an abstract idea, which is not indicative of significantly more. Additional elements (iii) and (v) recite insignificant extra-solution activities. Further, elements (iii) and (v) recite steps of receiving/transmitting data via a network, which the courts have determined to be a well-understood, routine, and conventional activity that is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Additional element (iv) recites mere instructions to apply the exception using a generic computer, which is not indicative of significantly more. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claims 16-20, they are dependent upon claim 15, and thereby incorporate the limitations of, and corresponding analysis applied to claim 15. Further, claims 16-20 comprise similar additional limitations to claims 2-5 and 7, respectively, and are rejected under the same rationale.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4-5, and 7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fang, Y. et al., “Hierarchical Graph Network for Multi-hop Question Answering”, available at https://arxiv.org/pdf/1911.03631 on October 6, 2020 (hereafter, FANG).
Regarding claim 1, FANG teaches “A method for comprehension-based question answering using a hierarchical taxonomy” ([Abstract] “In this paper, we present Hierarchical Graph Network (HGN) for multi-hop question answering. To aggregate clues from scattered texts across multiple paragraphs, a hierarchical graph is created by constructing nodes on different levels of granularity (questions, paragraphs, sentences, entities), the representations of which are initialized with pre-trained contextual encoders. Given this hierarchical graph, the initial node representations are updated through graph propagation, and multihop reasoning is performed via traversing through the graph edges for each subsequent sub-task (e.g., paragraph selection, supporting facts extraction, answer prediction). By weaving heterogeneous nodes into an integral unified graph, this hierarchical differentiation of node granularity enables HGN to support different question answering sub-tasks simultaneously. Experiments on the HotpotQA benchmark demonstrate that the proposed model achieves new state of the art, outperforming existing multi-hop QA approaches”)
Further, FANG teaches “receiving a word-based question” (Figure 2). In Figure 2 of FANG, the question “Q” is shown being received and used as input for the method.
Further, FANG teaches this in Table 3 (Page 8), where the questions are shown to be word-based.
Further, FANG teaches “selecting at least one layer of the hierarchical taxonomy, wherein the hierarchical taxonomy comprises at least two layers, each of the at least two layers including respective words resulting in the at least two layers having varying levels of complexity” (Figure 2)
The “Paragraph Level” is shown as having two layers, “P1” and “P2”, and each of the two layers includes “sentence level” and “entity level” complexities of respective words. The “Multi-task Prediction Module” includes a “Paragraph Selection” item, which showcases one of the layers being selected.
Further, FANG teaches “and using a pre-trained language model, responding to the word-based question using only words associated with the selected at least one layer of the at least two layers of the hierarchical taxonomy” ([Pages 5-6, 3.4 Multi-task Prediction] “After graph reasoning, the updated node representations are used for different sub-tasks: (i) paragraph selection based on paragraph nodes; (ii) supporting facts prediction based on sentence nodes; and (iii) answer prediction based on entity nodes and context representation G (Here, the words associated with the selected layer are isolated, and the answer node is notably “based on entity nodes and context representation”). Since the answers may not reside in entity nodes, the loss for entity node only serves as a regularization term. In our HGN model, all three tasks are jointly performed through multi-task learning. The final objective is defined as: [equation image: the joint loss, a weighted sum of the sub-task losses] where [equation image: the weighting terms] are hyper-parameters, and each loss function is a cross-entropy loss, calculated over the logits (described below). For both paragraph selection [equation image: the paragraph loss] and supporting facts prediction [equation image: the sentence loss], we use a two-layer MLP as the binary classifier (A pre-trained language model): [equation image: the two binary classifiers] where [equation image: the sentence logits] represents whether a sentence is selected as supporting facts, and [equation image: the paragraph logits] represents whether a paragraph contains the ground-truth supporting facts. We treat entity prediction [equation image: the entity loss] as a multiclass classification problem. Candidate entities include all entities in the question and those that match the titles in the context. (Here, the pre-trained language model determines the potential response using the entity nodes) If the ground-truth answer does not exist among the entity nodes, the entity loss is zero. Specifically, [equation image: the entity classifier]. The entity loss will only serve as a regularization term, and the final answer prediction will only rely on the answer span extraction module as follows. The logits of every position being the start and end of the ground-truth span are computed by a two-layer MLP on top of G in Eqn. (4): [equation image: the span start/end logits]. Following previous work (Xiao et al., 2019), we also need to identify the answer type, which includes the types of span, entity, yes and no. We use a 3-way two-layer MLP for answer-type classification based on the first hidden representation of G: [equation image: the answer-type classifier]. During decoding, we first use this to determine the answer type. If it is “yes” or “no”, we directly return it as the answer. Overall, the final cross-entropy loss [equation image: the training loss] used for training is defined over all the aforementioned logits: [equation image: the final loss over all logits].”) In summary, this citation shows that the “answer” or “response” is determined using only words associated with the selected layers, and that it is determined using a “pre-trained language model.” An illustrative sketch of the multi-task structure described in this citation follows below.
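For clarity regarding the multi-task structure quoted above, the following minimal sketch is offered. It is a hypothetical illustration only: the probability values, weighting terms, and names such as cross_entropy and lambda_para are assumed for illustration and are not FANG’s actual implementation.

    import math

    # Hypothetical sketch of a joint multi-task objective: several
    # cross-entropy losses combined with hyper-parameter weights, as
    # the quotation above describes for paragraph selection,
    # supporting-facts prediction, and entity prediction.

    def cross_entropy(probs, true_index):
        """Cross-entropy loss for one example, given a predicted
        probability distribution and the index of the true class."""
        return -math.log(probs[true_index])

    # Toy predicted distributions for each sub-task (illustrative).
    para_probs = [0.8, 0.2]         # paragraph selection (binary)
    sent_probs = [0.6, 0.4]         # supporting-facts prediction (binary)
    entity_probs = [0.1, 0.7, 0.2]  # entity prediction (multiclass)

    # Hyper-parameter weights for each sub-task loss (illustrative).
    lambda_para, lambda_sent, lambda_entity = 1.0, 1.0, 0.5

    joint_loss = (lambda_para * cross_entropy(para_probs, 0)
                  + lambda_sent * cross_entropy(sent_probs, 0)
                  + lambda_entity * cross_entropy(entity_probs, 1))
    print(f"joint loss: {joint_loss:.3f}")

The design point is simply that the sub-tasks are trained jointly through one weighted objective, which is what allows the selected layer’s nodes to inform the final answer prediction.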
Regarding claim 2, FANG teaches the limitations of claim 1. Further, FANG teaches “after receiving the word-based question, associating the word-based question with a layer of the hierarchical taxonomy” (Figure 2). In Figure 2, question “Q” is specifically associated with “P1”, as indicated by the line connecting them.
And further, ([Page 5, 3.4 Multi-task Prediction] “After graph reasoning, the updated node representations are used for different sub-tasks: (i) paragraph selection based on paragraph nodes; (ii) supporting facts prediction based on sentence nodes; and (iii) answer prediction based on entity nodes and context representation G.”)
Further, FANG teaches “wherein the selecting at least one layer of the hierarchical taxonomy includes determining which layer of the at least two layers of the hierarchical taxonomy comprises a layer of complexity one level less than the layer of the hierarchical taxonomy associated with the word-based question” (Figure 2). Question “Q” is specifically associated with “P1”, as indicated by the line connecting them. Further, the selection of P1 is based on the determination of which layer comprises a complexity one level less than the layer associated with the word-based question, in this case the “Sentence Level” nodes. An illustrative sketch of this determining step follows the supporting citations below.
This is further described in ([Page 4, Nodes and Edges] “Paragraphs are comprised of sentences, and each sentence contains multiple entities. This graph is naturally encoded in a hierarchical structure, and also motivates how we construct the hierarchical graph. For each paragraph node, we add edges between the node and all the sentences in the paragraph. For each sentence node, we extract all the entities in the sentence and add edges between the sentence node and these entity nodes. Optionally, edges between paragraphs and edges between sentences can also be included in the final graph.
Each type of these nodes captures semantics from different information sources. Thus, the hierarchical graph effectively exploits the structural information across all different granularity levels to learn fine-grained representations, which can locate supporting facts and answers more accurately than simpler graphs with homogeneous nodes.
An example hierarchical graph is illustrated in Figure 2. We define different types of edges as follows: (i) edges between question node and paragraph nodes; (ii) edges between question node and its corresponding entity nodes (entities appearing in the question, not shown for simplicity); (iii) edges between paragraph nodes and their corresponding sentence nodes (sentences within the paragraph); (iv) edges between sentence nodes and their linked paragraph nodes (linked through hyperlinks); (v) edges between sentence nodes and their corresponding entity nodes (entities appearing in the sentences); (vi) edges between paragraph nodes; and (vii) edges between sentence nodes that appear in the same paragraph. Note that a sentence is only connected to its previous and next neighboring sentence. The final graph consists of these seven types of edges as well as four types of nodes, which link the question to paragraphs, sentences, and entities in a hierarchical way.”)
And further, ([Page 5, 3.4 Multi-task Prediction] “After graph reasoning, the updated node representations are used for different sub-tasks: (i) paragraph selection based on paragraph nodes; (ii) supporting facts prediction based on sentence nodes; and (iii) answer prediction based on entity nodes and context representation G.”)
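As a minimal illustration of the claimed determining step, consider the following sketch. It is hypothetical: the layer numbering, the word lists, and the variable names are assumed for illustration and do not come from FANG or the present application.

    # Hypothetical sketch: select the taxonomy layer whose complexity
    # is one level less than the layer associated with the question.
    taxonomy = {
        3: ["organism", "vertebrate"],   # most complex layer
        2: ["mammal", "canine"],
        1: ["dog", "cat"],               # least complex layer
    }
    question_layer = 2                   # layer associated with the question
    selected_layer = question_layer - 1  # one level less complex
    allowed_words = taxonomy[selected_layer]
    print(allowed_words)                 # -> ['dog', 'cat']

The response would then be formulated using only allowed_words, consistent with the layer-restricted responding discussed above.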
Further, FANG teaches “and wherein, the word-based question is responded to by the pre-trained language model using only words associated with the layer of the at least two layers of the hierarchical taxonomy having the one less level of complexity” ([Page 4, Nodes and Edges], as quoted above regarding the hierarchical structure of paragraph, sentence, and entity nodes and the seven edge types linking them).
And further, ([Page 5, 3.4 Multi-task Prediction] “After graph reasoning, the updated node representations are used for different sub-tasks: (i) paragraph selection based on paragraph nodes; (ii) supporting facts prediction based on sentence nodes; and (iii) answer prediction based on entity nodes and context representation G.”) Here we see that the answer (i.e., the response) is based on the entity nodes and context representation, which means it is associated with the layer having one less level of complexity. Further, we can see that the answer comprises “words” in ([Page 12, Col. 2, “Additional Examples for Error Analysis”] “Below, we provide additional examples for error analysis, where “Q” denotes question, “A” denotes answer provided with dataset and “P” denotes the prediction of proposed model. A full list of all the 100 examples is provided in Table 10 and 11.
Category: Annotation
ID: 5ae2e0fd55429928c4239524
Q: What actor was also a president that Richard Darman worked with when they were in office?
A: George H. W. Bush
P: Ronald Reagan
ID: 5ab43b755542991779162c21
Q: What sports club based in Hamburg Germany had a Persian born football player who played for eight seasons?
A: Mehdi Mahdavikia
P: Hamburger SV
ID: 5a72e28f5542992359bc31ba
Q: Which technique did the director at Pzena Investment Management outline?
A: outlined by Joel Greenblatt
P: Magic formula investing”)
Regarding claim 4, FANG teaches the limitations of claim 2. Further, FANG teaches “wherein the associating is performed using a machine learning process” ([Page 5, 3.4 Multi-task Prediction] “After graph reasoning, the updated node representations are used for different sub-tasks: (i) paragraph selection based on paragraph nodes; (ii) supporting facts prediction based on sentence nodes; and (iii) answer prediction based on entity nodes and context representation G. Since the answers may not reside in entity nodes, the loss for entity node only serves as a regularization term. In our HGN model, all three tasks are jointly performed through multi-task learning.”)
Regarding claim 5, FANG teaches the limitations of claim 2. Further, FANG teaches “wherein the associating is performed using stored information associating questions with respective layers of at least one hierarchical taxonomy” ([Page 4, 3.2 Context Encoding] “Given the constructed hierarchical graph, the next step is to obtain the initial representations of all the graph nodes. To this end, we first combine all the selected paragraphs into context C, which is concatenated with the question Q and fed into pre-trained Transformer RoBERTa”) In this citation, the selected paragraphs’ information is concatenated with the question (i.e., stored information associating questions with respective layers). An illustrative sketch of this context-encoding step follows below.
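As a minimal illustration of the context-encoding step described in this citation, consider the following sketch. It is hypothetical: the strings, the "[SEP]" separator, and the variable names are assumed for illustration, and RoBERTa’s actual tokenization and model interfaces are not reproduced.

    # Hypothetical sketch of the context-encoding step: selected
    # paragraphs are combined into a context C, which is concatenated
    # with the question Q before being fed to a pre-trained encoder.
    question = "What sports club is based in Hamburg, Germany?"
    selected_paragraphs = [
        "Hamburger SV is a sports club based in Hamburg, Germany.",
        "The club has fielded players from many countries.",
    ]
    context = " ".join(selected_paragraphs)          # context C
    encoder_input = question + " [SEP] " + context   # Q concatenated with C
    print(encoder_input)
    # encoder_input would then be tokenized and passed to a pre-trained
    # Transformer (e.g., RoBERTa) to initialize the node representations.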
Regarding claim 7, FANG teaches the limitations of claim 2. Further, FANG teaches “wherein the determining is performed using information provided with the hierarchical taxonomy” ([Page 4, Nodes and Edges], as quoted above in the analysis of claim 2, which describes the node and edge information provided with the hierarchical graph across the paragraph, sentence, and entity levels).
And further, ([Page 5, 3.4 Multi-task Prediction] “After graph reasoning, the updated node representations are used for different sub-tasks: (i) paragraph selection based on paragraph nodes; (ii) supporting facts prediction based on sentence nodes; and (iii) answer prediction based on entity nodes and context representation G.”)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over FANG, as applied to the claims above, and further in view of Wikipedia, “Task-focused interface”, available at https://en.wikipedia.org/w/index.php?title=Task-focused_interface&oldid=1009584971 on March 1, 2021 (hereafter, WIKIPEDIA).
Regarding claim 3, FANG teaches the limitations of claim 2. FANG fails to explicitly teach “wherein the associating is performed by a user via a graphical user input.” However, analogous art of a webpage describing task-focused interfaces, WIKIPEDIA, does teach this ([Paragraph 1] “The task-focused interface is a type of user interface which extends the desktop metaphor of the graphical user interface to make tasks, not files and folders, the primary unit of interaction. Instead of showing entire hierarchies of information, such as a tree of documents, a task-focused interface shows the subset of the tree that is relevant to the task-at-hand. This addresses the problem of information overload when dealing with large hierarchies, such as those in software systems or large sets of documents. The task-focused interface is composed of a mechanism which allows the user to specify the task being worked on and to switch between active tasks, a model of the task context such as a degree-of-interest (DOI) ranking,[1] a focusing mechanism to filter or highlight the relevant documents. The task-focused interface has been validated with statistically significant[2] increases to knowledge worker productivity. It has been broadly adopted by programmers and is a key part of the Eclipse integrated development environment. The technology is also referred to as the "task context" model and the "task-focused programming" paradigm.”) (This citation shows hierarchical information being displayed on a graphical user interface and further shows the user interacting with and specifying parts of it.)
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of FANG with the teachings of WIKIPEDIA, because FANG uses hierarchies of information while WIKIPEDIA teaches the use of graphical user interfaces with inputs for the user in relation to hierarchical information.
One of ordinary skill in the art would be motivated to do so because, as WIKIPEDIA points out in paragraph 1, “This addresses the problem of information overload when dealing with large hierarchies, such as those in software systems or large sets of documents” and “The task-focused interface has been validated with statistically significant[2] increases to knowledge worker productivity.”
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over FANG, as applied to the claims above, and further in view of Athira, P. M., et al., “Architecture of an Ontology-Based Domain-Specific Natural Language Question Answering System”, available at https://arxiv.org/pdf/1311.3175 on November 13, 2013 (hereafter, ATHIRA).
Regarding claim 6, FANG teaches the limitations of claim 2. FANG fails to explicitly teach “wherein the determining is performed using information provided by a user.”
However, analogous art of another question answering system, ATHIRA, does teach this ([Abstract] “Question answering (QA) system aims at retrieving precise information from a large collection of documents against a query. This paper describes the architecture of a Natural Language Question Answering (NLQA) system for a specific domain based on the ontological information, a step towards semantic web question answering. The proposed architecture defines four basic modules suitable for enhancing current QA capabilities with the ability of processing complex questions. The first module was the question processing, which analyses and classifies the question and also reformulates the user query. The second module allows the process of retrieving the relevant documents. The next module processes the retrieved documents, and the last module performs the extraction and generation of a response. Natural language processing techniques are used for processing the question and documents and also for answer extraction. Ontology and domain knowledge are used for reformulating queries and identifying the relations. The aim of the system is to generate short and specific answer to the question that is asked in the natural language in a specific domain. We have achieved 94 % accuracy of natural language question answering in our implementation.”)
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of FANG with the teachings of ATHIRA, because both references aim to improve the efficiency of question answering (QA) systems.
One of ordinary skill in the art would be motivated to do so because FANG is specifically designed to answer questions, and, as ATHIRA points out in its abstract, “The aim of the system is to generate short and specific answer to the question that is asked in the natural language in a specific domain”; providing user input, and thus user control, would improve the system’s ability to achieve this goal.
Claims 8-9, 11-12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over FANG, as applied to the claims above, and further in view of Singh, Himanshu, “Everything you Need to Know About Hardware Requirements for Machine Learning”, available at https://www.einfochips.com/blog/everything-you-need-to-know-about-hardware-requirements-for-machine-learning/ on February 24, 2019 (hereafter, SINGH).
Regarding claim 8, it comprises similar limitations to claim 1 and is rejected under the same rationale, with the following additional limitation, which FANG fails to explicitly teach: “A non-transitory machine-readable medium having stored thereon at least one program, the at least one program including instructions”.
However, analogous art that teaches the hardware requirements for machine learning models and methods, SINGH, does teach this ([Here are some key insights into these hardware trends:, paragraph 4] “Memory and Storage: Machine learning models are getting larger, requiring more memory and storage capacity. High-bandwidth memory (HBM) and solid-state drives (SSDs) (a non-transitory computer-readable storage medium for storing the models which are made up of code, aka instructions for the processor and other hardware) are becoming critical for training and running these models efficiently.”)
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of FANG with the teachings of SINGH, because FANG uses machine learning models for its methods and SINGH describes hardware requirements for machine learning models.
One of ordinary skill in the art would be motivated to do so because if one does not meet the hardware “requirements” for any specific technology used as part of a method, the method will not function.
Regarding claims 9, 11-12, and 14, FANG in view of SINGH teaches the limitations of claim 8. Further, claims 9, 11-12, and 14 recite similar additional limitations to claims 2, 4-5, and 7, respectively, and are rejected under the same rationale.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over FANG in view of SINGH, as applied to claims above, and further in view of WIKIPEDIA, as applied to claims above.
Regarding claim 10, FANG in view of SINGH teaches the limitations of claim 9. Further, claim 10 recites similar additional limitations as claim 3 and is rejected under the same rationale.
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of FANG in view of SINGH with the teachings of WIKIPEDIA, because FANG in view of SINGH uses hierarchies of information while WIKIPEDIA teaches the use of graphical user interfaces with inputs for the user in relation to hierarchical information.
One of ordinary skill in the art would be motivated to do so because, as WIKIPEDIA points out in paragraph 1, “This addresses the problem of information overload when dealing with large hierarchies, such as those in software systems or large sets of documents” and “The task-focused interface has been validated with statistically significant[2] increases to knowledge worker productivity.”
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over FANG in view of SINGH, as applied to claims above, and further in view of ATHIRA, as applied to claims above.
Regarding claim 13, FANG in view of SINGH teaches the limitations of claim 9. FANG in view of SINGH fails to explicitly teach “wherein the determining is performed using information provided by a user.”
However, analogous art of another question answering system, ATHIRA, does teach this (Abstract, as quoted above in the analysis of claim 6).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the base reference of FANG in view of SINGH with the teachings of ATHIRA, because both FANG in view of SINGH and ATHIRA aim to improve the efficiency of question answering (QA) systems.
One of ordinary skill in the art would have been motivated to do so because FANG in view of SINGH is specifically designed to answer questions, and as ATHIRA points out in its abstract, “The aim of the system is to generate short and specific answer to the question that is asked in the natural language in a specific domain”; providing user input, and thus user control, would further this goal.
Claims 15, 16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over FANG in view of C3, “Infrastructure: Machine Learning Hardware Requirements,” available at https://c3.ai/introduction-what-is-machine-learning/machine-learning-hardware-requirements/ as of April 29, 2021 (hereinafter C3).
Regarding claim 15, it comprises similar limitations to claim 1 and is rejected under the same rationale, with the following additional limitation, which FANG fails to explicitly teach: “a storage device; and an apparatus; comprising a processor; and a memory coupled to the processor, the memory having stored therein at least one of programs or instructions executable by the processor”.
However, analogous art that teaches the hardware requirements for machine learning models and methods, C3, does teach this ([Processors: CPUs, GPUs, TPUs, and FPGAs] “The processor is a critical consideration in machine learning operations. The processor operates the computer program to execute arithmetic, logic, and input and output commands. This is the central nervous system that carries out machine learning model training and predictions.”) and ([Memory and Storage] “In addition to processor requirements, memory and storage are other key considerations for the AI/ML pipeline. To train or operate a machine learning model, programs require data and code to be stored in local memory to be executed by the processor.”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the base reference of FANG with the teachings of C3, because FANG uses machine learning models in its methods and C3 describes the hardware requirements of machine learning models.
One of ordinary skill in the art would have been motivated to do so because a method that relies on a specific technology cannot function unless the hardware requirements of that technology are met.
Regarding claims 16 and 18-19, FANG in view of C3 teaches the limitations of claim 15. Further, claims 16 and 18-19 recite similar additional limitations as claims 2 and 4-5, respectively, and are rejected under the same rationale.
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over FANG in view of C3, as applied to the claims above, and further in view of WIKIPEDIA.
Regarding claim 17, FANG in view of C3 teaches the limitations of claim 16. Further, claim 17 recites similar additional limitations as claim 3 and is rejected under the same rationale.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the base reference of FANG in view of C3 with the teachings of WIKIPEDIA, because FANG in view of C3 uses hierarchies of information while WIKIPEDIA teaches graphical user interfaces that accept user input in relation to hierarchical information.
One of ordinary skill in the art would have been motivated to do so because, as WIKIPEDIA points out in paragraph 1, “This addresses the problem of information overload when dealing with large hierarchies, such as those in software systems or large sets of documents” and “The task-focused interface has been validated with statistically significant[2] increases to knowledge worker productivity.”
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over FANG in view of C3, as applied to the claims above, and further in view of ATHIRA.
Regarding claim 20, FANG in view of C3 teaches the limitations of claim 16. Further, FANG teaches “wherein the determining is performed using… information provided with the hierarchical taxonomy” ([Page 4, Nodes and Edges] “Paragraphs are comprised of sentences, and each sentence contains multiple entities. This graph is naturally encoded in a hierarchical structure, and also motivates how we construct the hierarchical graph. For each paragraph node, we add edges between the node and all the sentences in the paragraph. For each sentence node, we extract all the entities in the sentence and add edges between the sentence node and these entity nodes. Optionally, edges between paragraphs and edges between sentences can also be included in the final graph.
Each type of these nodes captures semantics from different information sources. Thus, the hierarchical graph effectively exploits the structural information across all different granularity levels to learn fine-grained representations, which can locate supporting facts and answers more accurately than simpler graphs with homogeneous nodes.
An example hierarchical graph is illustrated in Figure 2. We define different types of edges as follows: (i) edges between question node and paragraph nodes; (ii) edges between question node and its corresponding entity nodes (entities appearing in the question, not shown for simplicity); (iii) edges between paragraph nodes and their corresponding sentence nodes (sentences within the paragraph); (iv) edges between sentence nodes and their linked paragraph nodes (linked through hyperlinks); (v) edges between sentence nodes and their corresponding entity nodes (entities appearing in the sentences); (vi) edges between paragraph nodes; and (vii) edges between sentence nodes that appear in the same paragraph. Note that a sentence is only connected to its previous and next neighboring sentence. The final graph consists of these seven types of edges as well as four types of nodes, which link the question to paragraphs, sentences, and entities in a hierarchical way.”)
FANG further teaches ([Page 5, 3.4 Multi-task Prediction] “After graph reasoning, the updated node representations are used for different sub-tasks: (i) paragraph selection based on paragraph nodes; (ii) supporting facts prediction based on sentence nodes; and (iii) answer prediction based on entity nodes and context representation G.”)
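For illustration only, a hierarchical graph of the kind FANG describes, with question, paragraph, sentence, and entity nodes joined by typed edges, could be sketched in Python roughly as follows. This is a minimal sketch, not FANG's actual implementation; the tuple-based node encoding and the naive entity extractor are hypothetical, and only a subset of FANG's seven edge types is shown.

# Minimal illustrative sketch of a hierarchical QA graph of the kind FANG
# describes; not FANG's actual code. Node tuples and helpers are hypothetical.
from collections import defaultdict

def build_hierarchical_graph(paragraphs, extract_entities):
    # Nodes: ("Q",) question, ("P", p) paragraph, ("S", p, s) sentence,
    # ("E", name) entity. Edges mirror a subset of FANG's edge types.
    edges = defaultdict(set)
    for p, sentences in enumerate(paragraphs):
        edges[("Q",)].add(("P", p))                    # question - paragraph
        for s, sentence in enumerate(sentences):
            edges[("P", p)].add(("S", p, s))           # paragraph - sentence
            if s > 0:                                  # neighboring sentences only
                edges[("S", p, s - 1)].add(("S", p, s))
            for name in extract_entities(sentence):
                edges[("S", p, s)].add(("E", name))    # sentence - entity
    return dict(edges)

paragraphs = [["Paris is the capital of France.", "It lies on the Seine."]]
graph = build_hierarchical_graph(
    paragraphs,
    extract_entities=lambda s: [w.strip(".") for w in s.split() if w[0].isupper()],
)

As in FANG's description, the resulting structure links the question to paragraphs, paragraphs to their sentences, neighboring sentences to each other, and sentences to the entities they contain, so that each granularity level remains separately addressable for downstream sub-tasks.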
FANG in view of C3 fails to explicitly teach “wherein the determining is performed using at least one of information provided by a user…”. However, analogous art of another question answering system, ATHIRA, does teach this ([Abstract] “Question answering (QA) system aims at retrieving precise information from a large collection of documents against a query. This paper describes the architecture of a Natural Language Question Answering (NLQA) system for a specific domain based on the ontological information, a step towards semantic web question answering. The proposed architecture defines four basic modules suitable for enhancing current QA capabilities with the ability of processing complex questions. The first module was the question processing, which analyses and classifies the question and also reformulates the user query. The second module allows the process of retrieving the relevant documents. The next module processes the retrieved documents, and the last module performs the extraction and generation of a response. Natural language processing techniques are used for processing the question and documents and also for answer extraction. Ontology and domain knowledge are used for reformulating queries and identifying the relations. The aim of the system is to generate short and specific answer to the question that is asked in the natural language in a specific domain. We have achieved 94 % accuracy of natural language question answering in our implementation.”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the base reference of FANG in view of C3 with the teachings of ATHIRA, because both references aim to improve the efficiency of question answering (QA) systems.
One of ordinary skill in the art would have been motivated to do so because FANG in view of C3 is specifically designed to answer questions, and as ATHIRA points out in its abstract, “The aim of the system is to generate short and specific answer to the question that is asked in the natural language in a specific domain”; providing user input, and thus user control, would further this goal.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW LEE LEWIS whose telephone number is (571)272-1906. The examiner can normally be reached Monday: 12:00 PM - 4:00 PM and Tuesday - Friday: 12:00 PM - 9:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at (571)272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Matthew Lee Lewis/Examiner, Art Unit 2144
/TAMARA T KYLE/Supervisory Patent Examiner, Art Unit 2144