DETAILED ACTION
Remarks
Claims 1-6 have been examined and rejected. This Office action is responsive to the amendment filed on 10/09/2025, which has been entered in the above identified application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, claim 1 recites “the inference for the user request included in the situation information”. The claim does not previously recite a user request included in the situation information. It is unclear how these limitations relate to the previously recited inference and user request. It is further unclear whether the inference or the user request is included in the situation information. For the purposes of examination, this limitation is interpreted as: a second inference for a second user request, wherein the second user request is included in the situation information.
Regarding claims 5 and 6, claims 5 and 6 contain substantially similar limitations to those found in claim 1. Consequently, claims 5 and 6 are rejected for the same reasons.
Regarding claims 2-4, claims 2-4 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for depending on an indefinite parent claim.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 5, and 6
Step 1: Claims 1, 5, and 6 recite a device, a medium, and a method; therefore, they are directed to the statutory categories of a machine, a manufacture, and a method.
Step 2A Prong 1: Claim 1 recites, inter alia:
the inference result being a result of an inference for the user request; making the inference for the user request included in the situation information by using the knowledge base; Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of making an inference, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.
specifying a desired result from the response information, the desired result being a result desired by the user for the user request; and updating the knowledge base such that the desired result more likely matches the inference result; Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of specifying a desired result and matching results, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.
wherein the processor assigns the user request to be an inference start node in nodes included in a knowledge graph specified by the knowledge base, calculates, starting from the inference start node, page rank values corresponding to arrival probabilities in a random walk on the knowledge graph, to calculate an importance of the nodes, specifies a node having a highest importance as an inference result node, and determines information indicated by the inference result node to be the inference result; Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of using nodes in a knowledge graph to calculate rank values corresponding to probabilities in a random walk, calculate importances, and determine inferences, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper, or is a mathematical concept that is achievable through mathematical computation.
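The random-walk ranking recited in this wherein clause can be illustrated with a short sketch. This is a hypothetical toy example, not the claimed device: the graph, the node names, the damping factor, and the helper `personalized_pagerank` are all assumptions introduced only to show that the calculation is a straightforward mathematical procedure.

```python
# Hypothetical sketch of the claimed inference step: a personalized PageRank
# (random walk with restart) starting from an inference start node, taking
# the highest-importance node as the inference result.

# Toy knowledge graph as an adjacency list (node -> outgoing neighbors).
graph = {
    "user_request": ["symptom_A", "symptom_B"],
    "symptom_A": ["cause_X"],
    "symptom_B": ["cause_X", "cause_Y"],
    "cause_X": ["user_request"],
    "cause_Y": ["user_request"],
}

def personalized_pagerank(graph, start, damping=0.85, iterations=100):
    """Power iteration; p[n] approximates the arrival probability at node n
    for a random walk that restarts at `start` with probability 1 - damping."""
    nodes = list(graph)
    p = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        nxt = {n: (1.0 - damping) * (1.0 if n == start else 0.0) for n in nodes}
        for src, outs in graph.items():
            share = damping * p[src] / len(outs)
            for dst in outs:
                nxt[dst] += share
        p = nxt
    return p

p = personalized_pagerank(graph, "user_request")
# The highest-importance node other than the start node is taken as the
# inference result node.
inference_result = max((n for n in p if n != "user_request"), key=p.get)
print(inference_result)
```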
Step 2A Prong 2: The judicial exception is not integrated into a practical application. The additional elements of “An inference device comprising: an auxiliary storage device to store a knowledge base for inferring an answer to a request; a processor to execute a program; and a memory to store the program which, when executed by the processor, performs processes of”, “A non-transitory computer-readable storage medium storing a program that causes a computer to execute processing, the processing comprising”, “An inference method comprising”, and “a user side device” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). The claimed computer components are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea. The additional elements of “acquiring situation information and response information, the situation information being information including a user request, the user request being a request from a user received via a user side device, the response information including a response of the user to an inference result, the response information being received via the user side device”, and “outputting the inference result to the user side device” amount to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)). Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application, and the claim is thus directed to the abstract idea.
Step 2B: The claims do not contain significantly more than the judicial exception. “An inference device comprising: an auxiliary storage device to store a knowledge base for inferring an answer to a request; a processor to execute a program; and a memory to store the program which, when executed by the processor, performs processes of”, “A non-transitory computer-readable storage medium storing a program that causes a computer to execute processing, the processing comprising”, and “An inference method comprising” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The additional elements of “acquiring situation information and response information, the situation information being information including a user request, the user request being a request from a user, the response information including a response of the user to an inference result”, and “outputting the inference result” amount to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and are well-understood, routine, and conventional activity (see MPEP § 2106.05(d); “Receiving or transmitting data over a network”). Nothing in the claim provides significantly more than the abstract idea. As such, the claim is ineligible.
Claims 2-4
Step 1: Claims 2-4 recite a device; therefore, they are directed to the statutory category of a machine.
Claims 2-4 merely narrow the previously recited abstract idea limitations. For the reasons described above with respect to claims 1, 5, and 6, this judicial exception is not integrated into a practical application, nor do the claims amount to significantly more than the abstract idea. The claims disclose limitations similar to those described for the independent claims above and do not provide anything more than methods of organizing human activity and mental processes that are practically capable of being performed in the human mind.
Claim 2 further recites when the inference result is different from the desired result, the processor downweights edges included in a shortest path between the inference start node and the inference result node and upweights edges included in the shortest path between the inference start node and a node corresponding to the desired result, in the knowledge graph; Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of comparing results, determining shortest paths, and weighting edges in a knowledge graph, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper, or is a mathematical concept that is achievable through mathematical computation.
Claim 3 further recites wherein the processor specifies a factor causing the desired result from the response information and updates the knowledge base by using the factor; Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of determining a factor and updating a knowledge base using the factor, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper, or is a mathematical concept that is achievable through mathematical computation.
Claim 4 further recites wherein, when the inference result is different from the desired result, the processor downweights edges included in a shortest path between the inference start node and the inference result node passing through a factor node and upweights edges included in the shortest path between the inference start node and a node corresponding to the desired result passing through the factor node, in the knowledge graph, the factor node being a node indicating the factor; Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of comparing results, determining shortest paths, and weighting edges in a knowledge graph, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper, or is a mathematical concept that is achievable through mathematical computation.
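The edge reweighting recited in claims 2 and 4 can be sketched as follows. This is a minimal illustration under stated assumptions — an undirected graph, hop-count shortest paths, and multiplicative update factors of 0.9 and 1.1 — none of which are taken from the claims; the edge names and helpers are hypothetical.

```python
from collections import deque

def shortest_path(edges, start, goal):
    """Breadth-first shortest path (by hop count) from start to goal,
    treating the edge list as an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return list(reversed(path))
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

def update_weights(weights, path, factor):
    """Scale the weight of every edge along a node path by `factor`."""
    for u, v in zip(path, path[1:]):
        weights[frozenset((u, v))] *= factor

# Hypothetical knowledge graph with unit edge weights.
edges = [("request", "A"), ("A", "wrong"), ("request", "B"), ("B", "desired")]
weights = {frozenset(e): 1.0 for e in edges}

inference_result, desired_result = "wrong", "desired"
if inference_result != desired_result:
    # Downweight the shortest path that produced the wrong inference result...
    update_weights(weights, shortest_path(edges, "request", "wrong"), 0.9)
    # ...and upweight the shortest path toward the user's desired result.
    update_weights(weights, shortest_path(edges, "request", "desired"), 1.1)

print(weights[frozenset(("A", "wrong"))])    # downweighted
print(weights[frozenset(("B", "desired"))])  # upweighted
```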
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3, 5, and 6 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Okamoto (US 20130185235 A1, published 07/18/2013).
Regarding claim 1, Okamoto teaches the claim comprising:
An inference device comprising: an auxiliary storage device storing a knowledge base for inferring an answer to a request; a processor to execute a program; and a memory to store the program which, when executed by the processor, performs processes of, acquiring situation information and response information, the situation information being information including a user request, the user request being a request from a user received via a user side device, the response information including a response of the user to an inference result, the response information being received via the user side device, the inference result being a result of an inference for the user request; making the inference for the user request included in the situation information by using the knowledge base (Okamoto Figs. 1-4; [0017], in order to obtain the node importance or ranking dependent on a user query (search request), it will be sufficient to perform Markov chain calculation only using portions related to the user query, not the overall original network; [0018], the "portions" related to the user query are appropriately cut from the network, the Markov chain calculation of the PPR algorithm is performed so as to be limited to the "portions", and thus the node importance or ranking dependent on the user query may be obtained in a shorter time than a case of performing calculation for the overall network; [0020], It is noted that clustering performed in the exemplary embodiment is not clustering of a static structure of nodes and links of the network but clustering of dynamic Markov chain processes where an agent randomly walks on links between nodes on the network; [0022], Markov chains in the overall network are structuralized in a form where plural intentional Markov chains are clustered on the basis of a theory of the machine learning. 
This clustering is performed in advance (that is, before a search responding to a user query online) using information for each user such as search history in the past of each user; [0066], (1) Observation data is obtained from search history of a user; [0068-0069], The "nodes obtained as a result of search" may be nodes (for example, nodes where fidelity to the search condition is equal to or more than a threshold value, nodes where fidelity to which a page rank value (or a PPR value) is added is equal to or more than a threshold value, a predefined number of nodes where fidelity is in a high rank, or the like) matching with a search condition indicated by the query, or may be nodes which are actually selected by a user from the node group matching with the search condition; [0070], the "user" mentioned here may be an individual or a group formed by people. In any case, it is possible to obtain a cluster division result personalized for the "user" through machine learning of clustering on the network by the use of search history of the "user" in the past; [0124], The procedures of FIG. 2 are executed using coming of a search query from a user (user query) as a trigger; [0146], In FIG. 3, a network information storage device 10 is a storage device storing information (hereinafter, referred to as "network information") regarding a network (for example, WWW) to be processed; [0147], A learning data storage device 14 is a device which stores learning data (the above-described "observation data .tau.") used for learning cluster division of Markov chain processes on the network. If search according to a concern of each user is to be realized, the learning data storage device 14 stores learning data for each user. As the learning data for each user, for example, search history of each user exemplified above may be used. 
In this case, for example, a search result of a search processor 20 described later may be stored in the learning data storage device 14; [0150], The search processor 20 receives a query from a user and executes the search process in FIG. 2 in real time in relation to the query (online process). In this process, the search processor 20 targets the detailed information (for example, text information of each web page) of each node stored in the network information storage device 10, executes primary search (S30 in FIG. 2) on the basis of a search condition of the user query, and obtains a seed node set. In addition, the processes in steps S32 to S36 are performed using the obtained seed node set and the information regarding the clustering result of the user stored in the clustering result storage device 18, and thereby a node group which is a target of the PPR operation (S40) is selected. Further, a partial network formed only by the selected node group is generated from the information regarding the network structure stored in the network information storage device 10 (S38). In addition, PPR values of the respective nodes are obtained by performing the PPR operation (S40) for the partial network, and the nodes are ranked based on the PPR values (S42), and a response is sent to the user on the basis of the ranking result (S44); [0259], The search system exemplified above is realized, for example, by executing a program which indicates the processes of the above-described respective functional modules in a general purpose computer. Here, the computer has a circuit configuration where, for example, a microprocessor such as a CPU as hardware, memories (primary storage) such as a random access memory (RAM) and a read only memory (ROM));
outputting the inference result to the user side device; specifying a desired result from the response information, the desired result being a result desired by the user for the user request; and updating the knowledge base such that the desired result more likely matches the inference result (Okamoto Figs. 1-4; [0022], clustering is performed in advance (that is, before a search responding to a user query online) using information for each user such as search history in the past of each user; [0023], a calculation range when a user query is processed is restricted using such a prior clustering result; [0066], (1) Observation data is obtained from search history of a user; [0068], The seed vector may be obtained, for example, by considering an N-dimensional vector where values of nodes obtained as a result of search of a user query; [0069], The "nodes obtained as a result of search" may be nodes (for example, nodes where fidelity to the search condition is equal to or more than a threshold value, nodes where fidelity to which a page rank value (or a PPR value) is added is equal to or more than a threshold value, a predefined number of nodes where fidelity is in a high rank, or the like) matching with a search condition indicated by the query, or may be nodes which are actually selected by a user from the node group matching with the search condition. If WWW is considered as the network, each web page is a node, a search condition (for example, a logical expression of keywords) which is transmitted to a search site by a user corresponds to a user query, a vector where web pages matching with the search condition or pages viewed by the user among the web pages are set to "1" and the other pages are set to "0", and the seed vector is obtained by normalizing the vector such that the sum total of components becomes 1. 
A group of D seed vectors is used as observation data for learning; [0070], it is possible to obtain a cluster division result personalized for the "user" through machine learning of clustering on the network by the use of search history of the "user"; [0075], When observation data is prepared using the above-described method (1), clustering of the Markov chains is equivalent to saying that user queries are clustered. For example, by using observation data corresponding to a query group issued from a certain group (for example, a system such as a company), it is possible to obtain a clustering result which reflects what the group has interests in; [0121], clustering is periodically learned according to the above-described methods using observation data for learning generated from search history; [0147], each time a user makes a new search, the oldest of the D search results may be deleted from the learning data storage device 14, and the search result in this time may be added; [0150], The search processor 20 receives a query from a user and executes the search process in FIG. 2 in real time in relation to the query (online process); a response is sent to the user on the basis of the ranking result (S44); [0259], The search system exemplified above is realized, for example, by executing a program which indicates the processes of the above-described respective functional modules in a general purpose computer. Here, the computer has a circuit configuration where, for example, a microprocessor such as a CPU as hardware, memories (primary storage) such as a random access memory (RAM) and a read only memory (ROM)),
wherein the processor assigns the user request to be an inference start node in nodes included in a knowledge graph specified by the knowledge base, calculates, starting from the inference start node, page rank values corresponding to arrival probabilities in a random walk on the knowledge graph, to calculate an importance of the nodes, specifies a node having a highest importance as an inference result node, and determines information indicated by the inference result node to be the inference result (Okamoto Figs. 1-4; [0009], an importance calculating unit that calculates importance of each node on the partial network by executing an operation of a personalized PageRank algorithm having the node group matching with the search condition as a seed vector for the cut partial network, and generates a search result to the user regarding the search condition on the basis of the calculated importance; [0017], However, in order to obtain the node importance or ranking dependent on a user query (search request), it will be sufficient to perform Markov chain calculation only using portions related to the user query, not the overall original network; [0021], a typical PageRank algorithm is modeled as the movement of an agent wandering about the overall network without particular intention; [0024], Introduction: Formulation of PPR Algorithm Based on Bayesian Framework; [0027], Each component p.sub.n (where n is an integer of 1 to N) of the active vector p is the probability of an agent existing at a node; [0028], an agent follows links on the network and randomly walks with the passage of time; [0052], In the above description, the prior probability indicated by Equation (1.1) is interpreted to model "a random walker wandering about the network at random". 
According to the Bayesian formula, it is also possible to assume a "random walker searching the network with a sense of purpose", that is, intending to search for a certain region (related to, for example, medical science); [0140], Importance of each of N(t) nodes on the partial network is set using a page rank value (Expression 85) of the steady state obtained by the PPR algorithm of Equation (A2); [0141], That is to say, Expression 86 indicates importance of a node i. The N.sup.(t) nodes are ranked depending on the size of the importance set in this way (S42). In addition, an evaluation value to which both the calculated importance of a node and the fidelity of the node to the user query (search condition) are added may be obtained, and each node may be ranked in higher order in the evaluation value. In addition, response information to the user query is generated according to the ranking result, and the response information returns to the user issuing the query (S44). The response information is, for example, a list of nodes arranged in ranking order (for example, a list where links to a web page are arranged in ranking order); [0142], Through the above-described procedures, the response information to the user query is obtained in consideration of a PPR value. Although, in this search process, calculation of the PPR algorithm shown in Equation (A2) is performed, a target of the calculation is not the overall network but a partial network formed only by clusters having a high attribution degree (similarity) to the seed node set. For this reason, costs needed for the calculation are much smaller than in the case of the typical PPR algorithm operation targeting the overall network. 
Therefore, a search result is obtained in substantially real time for a user query; [0155], It can be understood that the Markov chains on the network indicate movements of an agent (random walker) which follows links and moves on the network at random; [0158], (2) As a result of observing a location where the random walker moving about a region corresponding to the cluster defined stochastically in (1) is currently present, information regarding which side the random walker is present is obtained; [0159], It is assumed that an observation is performed several times independently, and thus a large amount of data regarding a location where the random walker is present is collected. By assigning the data to the probability model, "likelihood" is obtained. The likelihood is a function of a (temporarily determined) clustering structure or a (temporarily determined) community structure. Therefore, a clustering structure (community structure) is set so as to maximize the likelihood; [0168], With this setting, the Markov chains can be regarded as a random walker which is an agent present on a network, and p.sub.n(t) can be understood as a probability that the agent (random walker) may be found at the node n at the time point t)
Regarding claims 5 and 6, claims 5 and 6 contain substantially similar limitations to those found in claim 1. Consequently, claims 5 and 6 are rejected for the same reasons.
Regarding claim 3, Okamoto teaches all the limitations of claim 1, further comprising:
wherein the processor specifies a factor causing the desired result from the response information and updates the knowledge base by using the factor (Okamoto Figs. 1-4; [0066], (1) Observation data is obtained from search history of a user; [0068], The seed vector may be obtained, for example, by considering an N-dimensional vector where values of nodes obtained as a result of search of a user query; [0069], The "nodes obtained as a result of search" may be nodes (for example, nodes where fidelity to the search condition is equal to or more than a threshold value, nodes where fidelity to which a page rank value (or a PPR value) is added is equal to or more than a threshold value, a predefined number of nodes where fidelity is in a high rank, or the like) matching with a search condition indicated by the query, or may be nodes which are actually selected by a user from the node group matching with the search condition. If WWW is considered as the network, each web page is a node, a search condition (for example, a logical expression of keywords) which is transmitted to a search site by a user corresponds to a user query, a vector where web pages matching with the search condition or pages viewed by the user among the web pages are set to "1" and the other pages are set to "0", and the seed vector is obtained by normalizing the vector such that the sum total of components becomes 1. A group of D seed vectors is used as observation data for learning; [0075], When observation data is prepared using the above-described method (1), clustering of the Markov chains is equivalent to saying that user queries are clustered. 
For example, by using observation data corresponding to a query group issued from a certain group (for example, a system such as a company), it is possible to obtain a clustering result which reflects what the group has interests in; [0121], clustering is periodically learned according to the above-described methods using observation data for learning generated from search history; [0147], each time a user makes a new search, the oldest of the D search results may be deleted from the learning data storage device 14, and the search result in this time may be added; [0150], The search processor 20 receives a query from a user and executes the search process in FIG. 2 in real time in relation to the query (online process); a response is sent to the user on the basis of the ranking result (S44))
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Okamoto in view of Endo et al. (US 20170364310 A1, published 12/21/2017), hereinafter Endo.
Regarding claim 2, Okamoto teaches all the limitations of claim 1, further comprising:
when the inference result is different from the desired result, the processor downweights edges between the inference start node and the inference result node and upweights edges between the inference start node and a node corresponding to the desired result, in the knowledge graph (Okamoto Figs. 1-4; [0066], (1) Observation data is obtained from search history of a user; [0068], The seed vector may be obtained, for example, by considering an N-dimensional vector where values of nodes obtained as a result of search of a user query; [0069], may be nodes which are actually selected by a user from the node group matching with the search condition; [0121], clustering is periodically learned according to the above-described methods using observation data for learning generated from search history; [0150], The search processor 20 receives a query from a user and executes the search process in FIG. 2 in real time in relation to the query (online process). In this process, the search processor 20 targets the detailed information (for example, text information of each web page) of each node stored in the network information storage device 10, executes primary search (S30 in FIG. 2) on the basis of a search condition of the user query, and obtains a seed node set. In addition, the processes in steps S32 to S36 are performed using the obtained seed node set and the information regarding the clustering result of the user stored in the clustering result storage device 18, and thereby a node group which is a target of the PPR operation (S40) is selected. Further, a partial network formed only by the selected node group is generated from the information regarding the network structure stored in the network information storage device 10 (S38). 
In addition, PPR values of the respective nodes are obtained by performing the PPR operation (S40) for the partial network, and the nodes are ranked based on the PPR values (S42), and a response is sent to the user on the basis of the ranking result (S44); [0161], Here, n and m are 1, 2, . . . , and N, and N is a total number of nodes included in the network. In addition, in the example of the above-described system, an element A.sub.nm of the n-th row and the m-th column of the adjacent matrix has a value of either 0 or 1, that is, 1 if there is a link from a node m to a node n on a network, and 0 if there is no link thereon. In contrast, in the following example, the element A.sub.nm does not have a value of either 0 or 1 but has an analog value, that is, any one value in a certain continuous numerical value range (for example, a real number of 0 to 1). A value of the element A.sub.nm indicates the intensity (weight) of a link from the node m to the node n)
However, Okamoto fails to expressly teach that the processor downweights edges included in a shortest path between the inference start node and the inference result node and upweights edges included in the shortest path between the inference start node and a node corresponding to the desired result, in the knowledge graph. In the same field of endeavor, Endo teaches:
the processor downweights edges included in a shortest path between the inference start node and the inference result node and upweights edges included in the shortest path between the inference start node and a node corresponding to the desired result, in the knowledge graph (Endo Figs. 1-10; [0021], FIG. 1A shows an example of a task knowledge base. This task knowledge base is a knowledge base including knowledge related to the execution of tasks; [0041], In FIG. 4, the natural language processor 22 extracts a main concept from text (input sentence) outputted from the speech recognition unit 21 or character input unit 12 (step S401). For example, when the user makes a speech “I want something warm,” the natural language processor 22 analyzes the syntax of the input sentence and extracts the object “something warm.”; [0049], the interaction processor 23 selects the shortest path from among the paths retrieved as potential paths (S802). In the present embodiment, weights are previously assigned to the relatives with respect to the relatedness between concepts. The interaction processor 23 calculates the sum of the weights for each of the paths from the node 5b serving as a main concept to the root node 1a and selects one of the paths on the basis of the sizes of the sums. For example, smaller weights are assigned to relatives whose concepts have closer relatedness. Specifically, 0.5, 1.0, 3.0, and 10.0 are assigned to relatives IsA, HasFeature, RelatedTo, and Antonym, respectively. In this case, the weighted distances of the paths shown in FIG. 7, a path through “tea,” a path through “hot,” a path through “cold,” a path through “water,” and a path through “soup,” become 4.0, 5.0, 5.0, 13.5, and 13.5, respectively. The interaction processor 23 selects a path where concepts have the closest relatedness, that is, a path whose weighted distance is the smallest. 
In this case, the shortest distance is a path through “tea.”; [0054], the shortest path is selected using the weights of the relatives (S802 in FIG. 8). If paths having the shortest graph distance (that is, paths having the smallest number of edges) are defined as the shortest paths, a path through “soup,” a path through “water,” and a path through “tea,” whose graph distances are 3 in FIG. 7, are selected as the shortest paths. However, the path through “soup” includes “salad.” If “salad” is extracted as an important related concept, the interaction processing system 100 would inappropriately reply to the user's request “I want something warm” with a response sentence “We have salad. How about it?” This is because “soup” and “salad” on the path are associated with each other by a relative “Antonym” (antonym). Accordingly, it is preferred to make a path including a relative “Antonym” (antonym) less likely to be selected as the shortest path. According to the present embodiment, smaller weights are assigned to relatives whose concepts have closer relatedness and thus a path having a distance having the smallest weight is selected. Thus, an unfavorable important related concept can be made less likely to be extracted)
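Endo's selection of the path with the smallest summed relative weight ([0049]) can be sketched as follows. The relative weight table (IsA=0.5, HasFeature=1.0, RelatedTo=3.0, Antonym=10.0) and the resulting distances (4.0, 5.0, 5.0, 13.5, 13.5) are taken from the paragraph; the per-path edge sequences are hypothetical reconstructions chosen only so the sums reproduce those recited distances.

```python
# Sketch of Endo's shortest-path selection by summed relative weights
# ([0049]).  The weight table and distances match the paragraph; the
# edge sequences per path are hypothetical reconstructions.

WEIGHTS = {"IsA": 0.5, "HasFeature": 1.0, "RelatedTo": 3.0, "Antonym": 10.0}

paths = {
    "tea":   ["RelatedTo", "IsA", "IsA"],                # sums to 4.0
    "hot":   ["RelatedTo", "HasFeature", "IsA", "IsA"],  # sums to 5.0
    "cold":  ["RelatedTo", "HasFeature", "IsA", "IsA"],  # sums to 5.0
    "water": ["Antonym", "RelatedTo", "IsA"],            # sums to 13.5
    "soup":  ["Antonym", "RelatedTo", "IsA"],            # sums to 13.5
}

distances = {name: sum(WEIGHTS[r] for r in rels) for name, rels in paths.items()}
best = min(distances, key=distances.get)
print(best)  # tea -- the path whose weighted distance is smallest
```

As in Endo, the path through "tea" (4.0) wins even though several paths tie on raw hop count, because Antonym edges carry a large penalty.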
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated into Okamoto the teaching of Endo that the processor downweights edges included in a shortest path between the inference start node and the inference result node and upweights edges included in the shortest path between the inference start node and a node corresponding to the desired result, in the knowledge graph. Doing so would be desirable because there has been disclosed a knowledge base system that responds to a request from the user using knowledge bases (see Endo [0002]); however, the above conventional technology needs to be further improved (see Endo [0004]). According to the interaction processing method and system and non-transitory storage medium storing a program for executing the processing method of the present disclosure, the response ability is improved (see Endo [0006]). Additionally, assigning smaller weights to relatives whose concepts have closer relatedness (see Endo [0054]) would better incorporate the feedback of Okamoto ([0069]), and thus an unfavorable important related concept can be made less likely to be extracted (see Endo [0054]).
Regarding claim 4, Okamoto teaches all the limitations of claim 3, and further teaches:
wherein, when the inference result is different from the desired result, the processor downweights edges between the inference start node and the inference result node passing through a factor node and upweights edges between the inference start node and a node corresponding to the desired result passing through the factor node, in the knowledge graph, the factor node being a node indicating the factor (Okamoto Figs. 1-4; [0066], (1) Observation data is obtained from search history of a user; [0068], The seed vector may be obtained, for example, by considering an N-dimensional vector where values of nodes obtained as a result of search of a user query; [0069], may be nodes which are actually selected by a user from the node group matching with the search condition; [0121], clustering is periodically learned according to the above-described methods using observation data for learning generated from search history; [0150], The search processor 20 receives a query from a user and executes the search process in FIG. 2 in real time in relation to the query (online process). In this process, the search processor 20 targets the detailed information (for example, text information of each web page) of each node stored in the network information storage device 10, executes primary search (S30 in FIG. 2) on the basis of a search condition of the user query, and obtains a seed node set. In addition, the processes in steps S32 to S36 are performed using the obtained seed node set and the information regarding the clustering result of the user stored in the clustering result storage device 18, and thereby a node group which is a target of the PPR operation (S40) is selected. Further, a partial network formed only by the selected node group is generated from the information regarding the network structure stored in the network information storage device 10 (S38). 
In addition, PPR values of the respective nodes are obtained by performing the PPR operation (S40) for the partial network, and the nodes are ranked based on the PPR values (S42), and a response is sent to the user on the basis of the ranking result (S44); [0161], Here, n and m are 1, 2, . . . , and N, and N is a total number of nodes included in the network. In addition, in the example of the above-described system, an element A.sub.nm of the n-th row and the m-th column of the adjacent matrix has a value of either 0 or 1, that is, 1 if there is a link from a node m to a node n on a network, and 0 if there is no link thereon. In contrast, in the following example, the element A.sub.nm does not have a value of either 0 or 1 but has an analog value, that is, any one value in a certain continuous numerical value range (for example, a real number of 0 to 1). A value of the element A.sub.nm indicates the intensity (weight) of a link from the node m to the node n)
However, Okamoto fails to expressly teach that the processor downweights edges included in a shortest path between the inference start node and the inference result node passing through a factor node and upweights edges included in the shortest path between the inference start node and a node corresponding to the desired result passing through the factor node, in the knowledge graph, the factor node being a node indicating the factor. In the same field of endeavor, Endo teaches:
the processor downweights edges included in a shortest path between the inference start node and the inference result node passing through a factor node and upweights edges included in the shortest path between the inference start node and a node corresponding to the desired result passing through the factor node, in the knowledge graph, the factor node being a node indicating the factor (Endo Figs. 1-10; [0021], FIG. 1A shows an example of a task knowledge base. This task knowledge base is a knowledge base including knowledge related to the execution of tasks; [0041], In FIG. 4, the natural language processor 22 extracts a main concept from text (input sentence) outputted from the speech recognition unit 21 or character input unit 12 (step S401). For example, when the user makes a speech “I want something warm,” the natural language processor 22 analyzes the syntax of the input sentence and extracts the object “something warm.”; [0049], the interaction processor 23 selects the shortest path from among the paths retrieved as potential paths (S802). In the present embodiment, weights are previously assigned to the relatives with respect to the relatedness between concepts. The interaction processor 23 calculates the sum of the weights for each of the paths from the node 5b serving as a main concept to the root node 1a and selects one of the paths on the basis of the sizes of the sums. For example, smaller weights are assigned to relatives whose concepts have closer relatedness. Specifically, 0.5, 1.0, 3.0, and 10.0 are assigned to relatives IsA, HasFeature, RelatedTo, and Antonym, respectively. In this case, the weighted distances of the paths shown in FIG. 7, a path through “tea,” a path through “hot,” a path through “cold,” a path through “water,” and a path through “soup,” become 4.0, 5.0, 5.0, 13.5, and 13.5, respectively. 
The interaction processor 23 selects a path where concepts have the closest relatedness, that is, a path whose weighted distance is the smallest. In this case, the shortest distance is a path through “tea.”; [0054], the shortest path is selected using the weights of the relatives (S802 in FIG. 8). If paths having the shortest graph distance (that is, paths having the smallest number of edges) are defined as the shortest paths, a path through “soup,” a path through “water,” and a path through “tea,” whose graph distances are 3 in FIG. 7, are selected as the shortest paths. However, the path through “soup” includes “salad.” If “salad” is extracted as an important related concept, the interaction processing system 100 would inappropriately reply to the user's request “I want something warm” with a response sentence “We have salad. How about it?” This is because “soup” and “salad” on the path are associated with each other by a relative “Antonym” (antonym). Accordingly, it is preferred to make a path including a relative “Antonym” (antonym) less likely to be selected as the shortest path. According to the present embodiment, smaller weights are assigned to relatives whose concepts have closer relatedness and thus a path having a distance having the smallest weight is selected. Thus, an unfavorable important related concept can be made less likely to be extracted)
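The effect described in Endo [0054], where an Antonym-linked route through "soup" and "salad" ties on hop count but loses under weighted distance, can be sketched with a standard Dijkstra search over a small concept graph. The graph layout below is hypothetical; only the relative weight table and the soup/salad Antonym link are drawn from the cited paragraphs.

```python
import heapq

# Dijkstra over a small, hypothetical concept graph using Endo's relative
# weights ([0049]): IsA=0.5, HasFeature=1.0, RelatedTo=3.0, Antonym=10.0.
# Every route from "warm" to "root" has graph distance 3, but the routes
# containing an Antonym edge lose under weighted distance.

W = {"IsA": 0.5, "HasFeature": 1.0, "RelatedTo": 3.0, "Antonym": 10.0}

edges = {  # node -> [(neighbor, relative), ...]; layout is hypothetical
    "warm":  [("tea", "RelatedTo"), ("soup", "RelatedTo"), ("water", "Antonym")],
    "tea":   [("drink", "IsA")],
    "soup":  [("salad", "Antonym")],  # soup/salad linked by Antonym ([0054])
    "water": [("drink", "RelatedTo")],
    "salad": [("root", "IsA")],
    "drink": [("root", "IsA")],
}

def dijkstra(src, dst):
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, rel in edges.get(node, []):
            heapq.heappush(heap, (dist + W[rel], nbr, path + [nbr]))
    return float("inf"), []

dist, path = dijkstra("warm", "root")
print(dist, path)  # 4.0 ['warm', 'tea', 'drink', 'root']
```

The search avoids the "soup"/"salad" route (13.5) just as Endo describes, so the unfavorable related concept is never reached on the selected path.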
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated into Okamoto the teaching of Endo that the processor downweights edges included in a shortest path between the inference start node and the inference result node passing through a factor node and upweights edges included in the shortest path between the inference start node and a node corresponding to the desired result passing through the factor node, in the knowledge graph, the factor node being a node indicating the factor. Doing so would be desirable because there has been disclosed a knowledge base system that responds to a request from the user using knowledge bases (see Endo [0002]); however, the above conventional technology needs to be further improved (see Endo [0004]). According to the interaction processing method and system and non-transitory storage medium storing a program for executing the processing method of the present disclosure, the response ability is improved (see Endo [0006]). Additionally, assigning smaller weights to relatives whose concepts have closer relatedness (see Endo [0054]) would better incorporate the feedback of Okamoto ([0069]), and thus an unfavorable important related concept can be made less likely to be extracted (see Endo [0054]).
Response to Arguments
The Examiner acknowledges the Applicant’s amendments to claims 1, 5, and 6. The correction to the title is approved, and the objection to the title is respectfully withdrawn. The correction to claim 1 is approved, and the previous objection to claim 1 is respectfully withdrawn. Applicant alleges that the claims, as amended, particularly point out and distinctly claim the subject matter which Applicant regards as the invention. Examiner respectfully disagrees.
As discussed in the rejection, the metes and bounds of the claimed invention are unclear. Per MPEP 2173, the definiteness requirement for claim language ensures that the scope of the claims is clear so that the public is informed of the boundaries of what constitutes infringement of the patent. During examination, a claim must be given its broadest reasonable interpretation consistent with the specification as it would be interpreted by one of ordinary skill in the art. If the language of a claim, given its broadest reasonable interpretation, is such that a person of ordinary skill in the relevant art would read it with more than one reasonable interpretation, then a rejection under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph is appropriate (see MPEP 2173). As discussed in the rejection, Examiner determined that the language of claims 1, 5, and 6 was indefinite, and claims 1-6 stand rejected under 35 U.S.C. 112(b).
Applicant further alleges that "acquiring situation information and response information, the situation information being information including a user request, the user request being a request from a user received via a user side device, the response information including a response of the user to an inference result, the response information being received via the user side device, the inference result being a result of an inference for the user request; making the inference for the user request included in the situation information by using the knowledge base; [and] outputting the inference result to the user side device" clarifies the inputs and outputs of the inference device and causes the claims to be directed to a practical application where a request is received from a user side device and the result is sent to the user side device as well, and thus that claim 1 is directed to a patent eligible practical application of the alleged abstract idea under Step 2A Prong 2 of the Alice/Mayo test. Examiner respectfully disagrees.
As discussed in the rejection above, the claims are directed to an abstract idea that encompasses mental processes, including evaluations or observations that are practically capable of being performed in the human mind with the assistance of pen and paper, and mathematical concepts that are achievable through mathematical computation. The claims place no limits on how the inferring, specifying of results, and calculations are performed. That is, nothing in the claim elements precludes the steps from practically being performed in the mind. Thus, the broadest reasonable interpretation of the steps is that those steps fall within the mental process grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04.
Regarding “acquiring situation information and response information, the situation information being information including a user request, the user request being a request from a user received via a user side device, the response information including a response of the user to an inference result, the response information being received via the user side device”, and “outputting the inference result to the user side device”, these limitations were analyzed in Step 2A Prong 2 to determine whether they recited additional elements that integrate the exception into a practical application and in Step 2B to determine whether they recited additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. The other recited claim limitations were similarly analyzed. As discussed above, these limitations were determined to amount to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)). The term "extra-solution activity" can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. Extra-solution activity includes both pre-solution and post-solution activity. An example of post-solution activity is an element that is not integrated into the claim as a whole, e.g., a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent (see MPEP 2106.05(g)). When determining whether an additional element is insignificant extra-solution activity, examiners may consider the following: (1) whether the extra-solution limitation is well known; (2) whether the limitation is significant (i.e., it imposes meaningful limits on the claim such that it is not nominally or tangentially related to the invention); and (3) whether the limitation amounts to necessary data gathering and outputting (see MPEP 2106.05(g)).