DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment/Status of Claims
Claims 1, 3, 4, 6, 8, 9, 11, 13, and 14 were amended.
Claims 2, 5, 7, 10, 12, and 15 were cancelled.
Claims 1, 3, 4, 6, 8, 9, 11, 13, and 14 are pending and examined herein.
Claims 1, 3, 4, 6, 8, 9, 11, 13, and 14 are rejected under 35 U.S.C. 101.
Claims 1, 6, and 11 are rejected under 35 U.S.C. 102.
Claims 3, 4, 8, 9, 13, and 14 are rejected under 35 U.S.C. 103.
Response to Arguments
Applicant’s arguments, see page 8, filed 12/23/2025, with respect to the objection to the specification have been fully considered and are persuasive. The objection to the specification has been withdrawn. Note that the original specification, including page 7, has been uploaded to the application file.
The objections to claims 2, 5, 7, and 12 are moot, as those claims have been cancelled.
Applicant’s arguments, see page 8, filed 12/23/2025, with respect to the 35 U.S.C. 112(b) rejection of claims 3, 4, 8, 9, 13, and 14 have been fully considered and are persuasive. The rejection of claims 3, 4, 8, 9, 13, and 14 has been withdrawn. The 35 U.S.C. 112(b) rejection of claims 2, 5, 7, 10, 12, and 15 is moot as the claims have been cancelled.
Applicant's arguments filed 12/23/2025 with respect to the 35 U.S.C. 101 rejection of claims 1, 3, 4, 6, 8, 9, 11, 13, and 14 have been fully considered but they are not persuasive.
Applicant argues, see pages 9-11, that "In summary, the present application solves the issue of the prior art and therefore describes a specific solution to a technological problem, and the claims focus on specific improvements to the fields of artificial intelligence, intelligent search, knowledge graphs, and natural language processing."
Examiner respectfully disagrees.
MPEP 2106.05(a) states "If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies a technical problem and explains the details of an unconventional technical solution expressed in the claim, or identifies technical improvements realized by the claim over the prior art. For example, in McRO, the court relied on the specification’s explanation of how the particular rules recited in the claim enabled the automation of specific animation tasks that previously could only be performed subjectively by humans, when determining that the claims were directed to improvements in computer animation instead of an abstract idea. McRO, 837 F.3d at 1313-14, 120 USPQ2d at 1100-01. In contrast, the court in Affinity Labs of Tex. v. DirecTV, LLC relied on the specification’s failure to provide details regarding the manner in which the invention accomplished the alleged improvement when holding the claimed methods of delivering broadcast content to cellphones ineligible. 838 F.3d 1253, 1263-64, 120 USPQ2d 1201, 1207-08 (Fed. Cir. 2016)."
The specification does not describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Applicant presents a problem and a solution in the arguments; however, the problem solved is not described in the specification, and one of ordinary skill in the art would not be able to ascertain the improvement from the specification. Therefore, the claims are not directed to an improvement and do not contain eligible subject matter. See amended 35 U.S.C. 101 rejection below.
Applicant's arguments filed 12/23/2025 regarding the 35 U.S.C. 102 and 35 U.S.C. 103 rejection of claims 1, 3, 4, 6, 8, 9, 11, 13, and 14 have been fully considered but they are not persuasive. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., "the system question in the present application is triggered dynamically by the information that is not included in each user input question but exists in the plurality of insurance rules.") are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). See amended 35 U.S.C. 102 and 35 U.S.C. 103 rejections below.
Claim Objections
Claims 1, 6, and 11 are objected to because of the following informalities:
Claims 1, 6, and 11 state “generating the triplet is generated based on”. This should likely be “generating the triplet based on”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3, 4, 6, 8, 9, 11, 13, and 14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
MPEP § 2106(III) sets out steps for evaluating whether a claim is drawn to patent-eligible subject matter. The analysis of claims 1, 3, 4, 6, 8, 9, 11, 13, and 14, in accordance with these steps, follows.
Step 1 Analysis:
Step 1 is to determine whether the claim is directed to a statutory category (process, machine, manufacture, or composition of matter). Claims 1, 3, 4, 6, 8, and 9 are directed to a process and claims 11, 13, and 14 are directed to a machine. All claims are directed to statutory categories and the analysis proceeds.
Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis:
Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101.
None of the claims represent an improvement to technology.
Regarding claim 1, the following claim elements are abstract ideas:
searching for an answer in an insurance knowledge graph based on the target insurance rule, wherein the insurance knowledge graph is generated based on the plurality of insurance rules, each insurance rule is presented as a triplet in the knowledge graph (Given the knowledge graph containing rule triplets and a rule, one could practically search for an answer in an insurance knowledge graph. This is a mental process. One could also create a knowledge graph with triplets for the rules, given the insurance rules, practically in the human mind. This is also a mental process.)
wherein the each insurance rule is presented as a triplet in the insurance knowledge graph comprises: (Presenting rules as triplets in a knowledge graph can be practically performed in the human mind. This is a mental process.)
dividing the insurance rules into primary rules and sub-rules to generate interrelationship or subordination relation between the insurance rules; and (Dividing rules into primary and sub-rules to generate relations between the rules can be practically performed in the human mind. This is a mental process.)
generating the triplet is generated based on a relationship among the subjects of the each insurance rule; (Generating a triple based on relationships between the subjects can be practically performed in the human mind. This is a mental process.)
wherein obtaining the target insurance rule matching the user question from the plurality of insurance rules, comprises: (Obtaining a rule matching the user question can be practically performed in the human mind. This is a mental process.)
obtaining a match result by matching the user question with the plurality of insurance rules, wherein the match result comprises at least one insurance rule involved in the user question; (Obtaining a match result by matching the question with rules involved in the question can be practically performed in the human mind. This is a mental process.)
in response to determining that the match result comprises a plurality of insurance rules, (Determining that the match result comprises a plurality of rules can be practically performed in the human mind. This is a mental process.)
obtain a current match result by continuing to match a user reply to the system question with the match result, until one insurance rule is included in the match result; and (Matching a user reply to the question until one insurance rule is included can be practically performed in the human mind. This is a mental process.)
determining the insurance rule included in the current match result as the target insurance rule. (Determining the rule as the target rule is the mental process of evaluation.)
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:
A method for knowledge answering, performed by an electronic device, comprising: (This limitation recites generic computer components. This amounts to mere instructions to apply an exception.)
receiving a user question entered by a client, and obtaining a target insurance rule matching the user question from a plurality of insurance rules; (Receiving and obtaining data are both known processes on computers. This amounts to mere instructions to apply an exception.)
returning the answer to the client. (This is the insignificant extra-solution activity of transmitting data over a network. See MPEP § 2106.05(d)(II), list 1, example i.)
sending, based on information contained within the plurality of insurance rules but not included in the user question, a system question to the client in a preset order from a primary rule to a sub-rule, and (Sending a question to the client is the insignificant extra-solution activity of transmitting data over a network. See MPEP § 2106.05(d)(II), list 1, example i.)
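For illustration only, the kind of triplet matching and question-driven narrowing recited in claim 1 can be sketched as follows. This is not the applicant's or any reference's implementation; all rule names, relations, and data structures below are hypothetical:

```python
# Hypothetical sketch of the claimed matching-and-narrowing process.
# Each insurance rule is a (subject, relation, object) triplet; sub-rules
# hang off primary rules through a "has-sub-rule" relation.
triplets = [
    ("insurance", "has-sub-rule", "vehicle insurance"),
    ("insurance", "has-sub-rule", "homeowner insurance"),
    ("vehicle insurance", "has-sub-rule", "auto insurance"),
    ("vehicle insurance", "has-sub-rule", "boat insurance"),
]

def match_rules(question, rules):
    """Match result: every rule whose subject appears in the question."""
    return [r for r in rules if r[0] in question.lower()]

def narrow_to_target(question, replies):
    """Continue matching user replies (to system questions) against the
    match result until one rule remains; that rule is the target rule."""
    matches = match_rules(question, triplets)
    replies = iter(replies)
    while len(matches) > 1:
        # A system question would list the objects of the remaining rules,
        # in order from primary rule to sub-rule.
        reply = next(replies)
        matches = [r for r in matches if r[2] == reply]
    return matches[0] if matches else None
```

As the sketch shows, every step other than sending and receiving data over a network can be performed as a series of comparisons, consistent with the mental-process characterization above.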
Regarding claim 3, the rejection of claim 1 is incorporated herein. The following are abstract ideas:
converting texts of the plurality of insurance rules into insurance rule character strings; and (Converting text into character strings can be practically performed in the human mind. This is a mental process.)
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:
obtaining the match result by invoking a regular expression matching function re.match, using the insurance rule character strings as a matching pattern, to match against the user question (Invoking a regular expression matching to match strings is a known process in computing and amounts to mere instructions to apply an exception.)
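For context, `re.match` is the Python standard-library regular-expression call named in the claim. A minimal illustration, using hypothetical rule character strings as matching patterns:

```python
import re

# Hypothetical insurance-rule character strings used as matching patterns.
rule_patterns = [r".*vehicle insurance.*", r".*homeowner insurance.*"]

user_question = "What does vehicle insurance cover?"

# re.match anchors at the start of the string; each rule string that
# matches the user question contributes to the match result.
match_result = [p for p in rule_patterns if re.match(p, user_question)]
```

Invoking `re.match` in this manner is a routine use of a built-in library function.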
Regarding claim 4, the rejection of claim 1 is incorporated herein. The following are abstract ideas:
obtaining the match result by determining whether the user question matches each of the insurance rules (Determining if the user question matches each of the insurance rules can be practically performed in the human mind. This is the mental process of evaluation.)
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:
through a preset question-answer model, wherein the preset question-answer model is configured to perform binary classification determining on both the user question and each of the insurance rules. (This limitation recites a generic machine learning model and a generic machine learning process. This amounts to mere instructions to apply an exception.)
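For illustration, a binary classification determination over (user question, insurance rule) pairs can be sketched as follows; the token-overlap scorer is a hypothetical stand-in for a trained question-answer model:

```python
# Stand-in for a preset question-answer model: for each (question, rule)
# pair, a binary classification decides whether the question matches the rule.
def qa_model_predict(question, rule):
    """Hypothetical scorer: 1 if enough of the rule's tokens appear in
    the question, else 0."""
    q_tokens = set(question.lower().split())
    r_tokens = set(rule.lower().split())
    overlap = len(q_tokens & r_tokens) / max(len(r_tokens), 1)
    return 1 if overlap >= 0.5 else 0

rules = ["vehicle insurance covers collision damage",
         "homeowner insurance covers fire damage"]
question = "does vehicle insurance cover collision damage"
match_result = [r for r in rules if qa_model_predict(question, r) == 1]
```

Any model producing a 0/1 decision per pair fits the limitation as claimed, which is why the limitation recites only a generic machine learning process.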
Regarding claim 6, the following are abstract ideas:
generating an insurance knowledge graph based on the plurality of insurance rules, wherein each of the insurance rules is presented as a triplet in the insurance knowledge graph; and (One could practically construct a knowledge graph based on rules, wherein the rules are presented as triplets in the human mind with the aid of pen and paper. This is a mental process.)
extracting a plurality of insurance rules based on insurance information (One could extract rules from insurance documents practically in the human mind, i.e. extract subject-predicate-object triples from the document. This is a mental process.)
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:
A method for generating a knowledge answering system, performed by an electronic device, comprising: (This limitation recites generic machine learning and generic computer components. This amounts to mere instructions to apply an exception.)
by using a pre-trained extraction model; (This recites a generic machine learning model and processes. This amounts to mere instructions to apply an exception.)
generating a reasoning engine based on the plurality of insurance rules, wherein the reasoning engine is configured to: (This recites generic machine learning components and processes. This amounts to mere instructions to apply an exception.)
wherein the reasoning engine is further configured to: (This recites generic machine learning components and processes. This amounts to mere instructions to apply an exception.)
The remainder of claim 6 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis.
Claims 8-9 recite substantially similar subject matter to claims 3-4 respectively and are rejected with the same rationale, mutatis mutandis.
Regarding claim 11, the following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:
An electronic device, comprising: (This recites a generic computer component; this amounts to mere instructions to apply an exception.)
at least one processor; and (This recites a generic computer component; this amounts to mere instructions to apply an exception.)
a memory, configured to store instructions executable by the at least one processor, (This recites generic computer components and processes; this amounts to mere instructions to apply an exception.)
wherein when the instructions are executed by the at least one processor, the at least one processor is caused to implement a method for knowledge answering, the method comprising: (This recites generic computer components and processes; this amounts to mere instructions to apply an exception.)
The remainder of claim 11 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis.
Claims 13-14 recite substantially similar subject matter to claims 3-4 respectively and are rejected with the same rationale, mutatis mutandis.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 6, and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bakis (US 2020/0042642 A1).
Regarding claim 1, Bakis teaches
A method for knowledge answering, performed by an electronic device, comprising: (The abstract states "As part of the invention, a semantic matcher is provided to select among the answers provided by the plurality of knowledge sources for a best answer to a user query." [0025] states "With reference now to FIG. 2, a block diagram of a data processing system is shown in which illustrative embodiments may be implemented. Data processing system 200 is an example of a computer, such as server 104 or client 110 in FIG. 1, in which computer-usable program code or instructions implementing the processes may be located for the illustrative embodiments.")
receiving a user question entered by a client, and obtaining a target insurance rule matching the user question from a plurality of insurance rules; ([0079] states "Within the knowledge base, there are a plurality of nodes, each of which represent an RDF node for entities which are relevant for the auto insurance task. So in conversing with the user, the conversational interface would start at the root node for an insurance question, node 621. As the conversation with the user progresses, it develops that the user wants a policy, so the system traverses to node 623, and that of the policies offered, he wants a vehicle policy, node 625. As the conversation progresses, the conversational interface will parse the natural language meaning of the users' questions to navigate between the nodes. At each node, if not provided by the knowledge base, the system will retrieve the appropriate information from the No-SQL DB 601 and SQL DB 602." As the system matches the user question to nodes which represent insurance rules, the traversed to node is interpreted as the target insurance rule.)
searching for an answer in an insurance knowledge graph based on the target insurance rule, wherein the insurance knowledge graph is generated based on the plurality of insurance rules, each insurance rule is presented as a triplet in the insurance knowledge graph; and ([0138] states "In response to the chat bot window, the user indicates (1) “I want insurance” as a natural language input. As indicated above, the conversational interface will parse the semantic meaning of the user input and determine the searchable entities in the input, e.g., “insurance”. A search in the knowledge graph will locate the insurance node 1003 and also the related nodes business insurance 1005, vehicle insurance 1007 and homeowner's insurance 1009." [0080] states "First, the interface extracts the entity (subject, object) and relationship to the natural language user input. Next, a semantic matcher is applied to the extracted entities and relationships to construct the pairs and the RDF triples." Therefore, the triples (rules) are used to search for an answer in the knowledge graph. [0069] states "First, the Implicit Dialog process inherits the existing schema and business logic by preserving the existing web schemas (HTML tag paths, DOM tree) by extracting each piece of data (like facts or triples) and associating the extracted data with a schema path automatically. The preserved information is stored in a natural language conversation overlay for the web site. In preferred embodiments, the Implicit Dialog process stores the overlay in a knowledge graph; other databases are used to store information extracted from the web sites in different manners. The knowledge graph serves as the natural language conversation overlay and is used by the conversation interface to answer user questions." Therefore, the knowledge graph is generated by the triples, interpreted as the insurance rules. 
[0079] states "Within the knowledge base, there are a plurality of nodes, each of which represent an RDF node for entities which are relevant for the auto insurance task." Therefore, the insurance rules are presented as RDF triples in the knowledge graph.)
returning the answer to the client; ([0140] states "In response to the user query (7), the interface determines that the user is looking for insurance discounts for older people by the semantic meaning of the query. The best node from the auto insurance node 1013 is the mature discounts node 1017 which includes the path to nodes 1025 and 1027 from which the conversational interface formulates the system response (8)." The system response is interpreted as the answer, which is returned to the client, see the example conversation [0130] - [0137].)
wherein the each insurance rule is presented as a triplet in the insurance knowledge graph comprises: ([0079] states "Within the knowledge base, there are a plurality of nodes, each of which represent an RDF node for entities which are relevant for the auto insurance task." Therefore, the insurance rules are presented as RDF triples in the knowledge graph.)
dividing the insurance rules into primary rules and sub-rules to generate interrelationship or subordination relation between the insurance rules; and (As shown in Fig. 6, the knowledge graph contains the root node insurance and subsequent nodes for the other rules. Another example is the rule vehicle, with sub-rules cycle, auto, atv, rv, and boat. As there is a hierarchy to the knowledge graph, the insurance rules are divided into primary rules and sub-rules. [0059] states "Each candidate triple contains a pair of sentence elements and the relationship between the two elements." [0067] states "The lower panel 505 shows the triple < “total loss adjustment procedure” (subject), “has-item” (relationship), “Remove your license plates and personal items from the vehicle” (object) > was extracted from the webpage." “has-item” is a subordination relation that is indicated in the knowledge graph by the hierarchy/division of rules.)
generating the triplet is generated based on a relationship among the subjects of the each insurance rule; ([0059] states "In step 409, the extracted sentence / DOM path pairs are processed by a dependency parser. A dependency parser analyzes the grammatical structure of a sentence and establishes the relationships between the elements in the sentence. Typically, the parser will establish the relationship between the “head” words or subjects of the sentence and the words which modify the head words, the objects. In step 411, the output of the dependency parser is used to construct a set of candidate triples. Each candidate triple contains a pair of sentence elements and the relationship between the two elements.")
wherein obtaining the target insurance rule matching the user question from the plurality of insurance rules, comprises: ([0079] states "Within the knowledge base, there are a plurality of nodes, each of which represent an RDF node for entities which are relevant for the auto insurance task. So in conversing with the user, the conversational interface would start at the root node for an insurance question, node 621. As the conversation with the user progresses, it develops that the user wants a policy, so the system traverses to node 623, and that of the policies offered, he wants a vehicle policy, node 625. As the conversation progresses, the conversational interface will parse the natural language meaning of the users' questions to navigate between the nodes. At each node, if not provided by the knowledge base, the system will retrieve the appropriate information from the No-SQL DB 601 and SQL DB 602." As the system matches the user question to nodes which represent insurance rules, the traversed to node is interpreted as the target insurance rule.)
obtaining a match result by matching the user question with the plurality of insurance rules, wherein the match result comprises at least one insurance rule involved in the user question; ([0138] states "Referring now to the knowledge graph depicted in FIG. 10 and the dialog above, the user enters the xyz.com web site. The user either starts the chat bot, or the chat bot starts automatically as the web site application notes the user presence browsing the web site for a predetermined time period. In response to the chat bot window, the user indicates (1) “I want insurance” as a natural language input. As indicated above, the conversational interface will parse the semantic meaning of the user input and determine the searchable entities in the input, e.g., “insurance”. A search in the knowledge graph will locate the insurance node 1003 and also the related nodes business insurance 1005, vehicle insurance 1007 and homeowner's insurance 1009." The nodes and related nodes are interpreted as the match result.)
in response to determining that the match result comprises a plurality of insurance rules, sending, based on information contained within the plurality of insurance rules but not included in the user question, a system question to the client in a preset order from a primary rule to a sub-rule, and ([0138] states "In response to the chat bot window, the user indicates (1) “I want insurance” as a natural language input. As indicated above, the conversational interface will parse the semantic meaning of the user input and determine the searchable entities in the input, e.g., “insurance”. A search in the knowledge graph will locate the insurance node 1003 and also the related nodes business insurance 1005, vehicle insurance 1007 and homeowner's insurance 1009. Thus, the conversational interface needs for the user to be more specific in the type of insurance, and generates the system response (2) “What kind of insurance? Business, Vehicle or Homeowners?” using the information in the knowledge graph and the rules for creating the system response." As the match result contained nodes and related nodes, the match result comprises a plurality of insurance rules. The system question is generated based on information contained in the rules but not included in the user question, namely business insurance 1005, vehicle insurance 1007 and homeowner's insurance 1009. Following the nodes in Fig. 10, the questions are asked in a preset order from the primary rules at the top of the graph to the sub-rules at the bottom of the graph.)
obtaining a current match result by continuing to match a user reply to the system question with the match result, until one insurance rule is included in the current match result; and ([0139] states "The user responds (3) “I want vehicle insurance”. The conversational interface parses the semantic meaning and determines the searchable entity is “vehicle insurance”. Using that information as well as context data from the dialog log and dialog state from the persistent layer, the interface will progress to the vehicle insurance node and find the related nodes boat insurance 1011, auto insurance 1013 and motorcycle insurance 1015. Again, the interface determines that it needs the user to be more specific and generates the system response (4) “What kind of vehicle insurance? Auto insurance, boat insurance or motorcycle insurance?”." [0140] states "The user responds (5) “Auto” and based on the semantic meaning, the interface will progress to the auto insurance node 1013. Here, there are rules for the conversational interface to return the insurance quote link 1023 and URL information 1029 in system response (6). In response to the user query (7), the interface determines that the user is looking for insurance discounts for older people by the semantic meaning of the query. The best node from the auto insurance node 1013 is the mature discounts node 1017 which includes the path to nodes 1025 and 1027 from which the conversational interface formulates the system response (8)." Therefore, user replies are matched with the match result until one insurance rule, in this case the mature discounts node, is included in the current match result.)
determining the insurance rule included in the current match result as the target insurance rule. ([0140] states “The best node from the auto insurance node 1013 is the mature discounts node 1017 which includes the path to nodes 1025 and 1027 from which the conversational interface formulates the system response (8).” As this is the final rule, and the system formulates a response from it, the system determines this to be the target insurance rule.)
Regarding claim 6, Bakis teaches
A method for generating a knowledge answering system, performed by an electronic device, comprising: (The abstract states "As part of the invention, a semantic matcher is provided to select among the answers provided by the plurality of knowledge sources for a best answer to a user query." [0025] states "With reference now to FIG. 2, a block diagram of a data processing system is shown in which illustrative embodiments may be implemented. Data processing system 200 is an example of a computer, such as server 104 or client 110 in FIG. 1, in which computer-usable program code or instructions implementing the processes may be located for the illustrative embodiments.")
extracting a plurality of insurance rules based on insurance information by using a pre-trained extraction model; ([0042] states "As shown in the figure, the domain corpus 301 is fed into the deep learning process 303 which in a preferred embodiment is a sequence-to-sequence learning process. The domain corpus 301 is collected by extracting knowledge from the target web site and possibly outside data sources by one or more knowledge extraction and integration modules (not shown)." The data sources are interpreted as the insurance information, as the knowledge graph constructed by the data is insurance data. See Fig. 5 and Fig. 6. [0042] further states "The deep learning process 303 uses the domain corpus 301 to provide input to the new dialog question and answer construction module 305, the new triple construction module 307 and the new table construction module 309." The triples that are constructed are interpreted as the plurality of insurance rules.)
generating an insurance knowledge graph based on the plurality of insurance rules, wherein each of the insurance rules is presented as a triplet in the insurance knowledge graph; and ([0045] states "The new triple construction module 307 provides the triple input used to construct a knowledge graph model 317." See Fig. 6, as the knowledge graph is an insurance knowledge graph. As the triplets are used for construction, the insurance rules are presented as triplets in the knowledge graph.)
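The triple-based construction cited from Bakis [0045] can be illustrated with a minimal sketch: rules represented as (subject, predicate, object) triplets that are assembled into a graph. The rule content below is hypothetical, not taken from the reference:

```python
# Hypothetical sketch: insurance rules as (subject, predicate, object)
# triples, the form in which a triple construction module would feed a
# knowledge graph model. All rule content here is illustrative.
insurance_rules = [
    ("auto_insurance", "offers_discount", "mature_driver_discount"),
    ("mature_driver_discount", "requires", "age_over_55"),
    ("vehicle_insurance", "has_subtype", "auto_insurance"),
]

# A minimal knowledge graph as an adjacency mapping built from the triples.
graph = {}
for subj, pred, obj in insurance_rules:
    graph.setdefault(subj, []).append((pred, obj))

print(graph["auto_insurance"])  # [('offers_discount', 'mature_driver_discount')]
```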
generating a reasoning engine based on the plurality of insurance rules, wherein the reasoning engine is configured to: ([0005] states "According to this disclosure, a method, apparatus and computer program product for creating a dialog system for web content is described. Knowledge is extracted from a target web application for the dialog system. The knowledge includes an organizational structure of the target web application and domain knowledge pertinent to the target web application. A deep learning process associates the domain knowledge with the organization structure of the target application. A plurality of knowledge sources of different respective types are created from the domain knowledge and the organizational structure. Each of the knowledge sources is used for providing answers to user queries to the dialog system. As part of the invention, a semantic matcher is provided to select among the answers provided by the plurality of knowledge sources for a best answer to a user query." The dialog system is interpreted as the reasoning engine. As the knowledge graph is a knowledge source, is part of the dialog system, and is constructed based on insurance rules, the reasoning engine is generated based on the insurance rules.)
The remainder of claim 6 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis.
Regarding claim 11, Bakis teaches
An electronic device, comprising: ([0020] states "With reference now to the drawings and in particu lar with reference to FIGS. 1-2, exemplary diagrams of data processing environments are provided in which illustrative embodiments of the disclosure may be implemented.")
at least one processor; and ([0026] states "Processor unit 204 serves to execute instructions for software that may be loaded into memory 206.")
a memory, configured to store instructions executable by the at least one processor, ([0026] states "Processor unit 204 serves to execute instructions for software that may be loaded into memory 206.")
wherein when the instructions are executed by the at least one processor, the at least one processor is caused to implement a method for knowledge answering, the method comprising: ([0030] states "Instructions for the operating system and applications or programs are located on persistent storage 208. These instructions may be loaded into memory 206 for execution by processor unit 204. The processes of the different embodiments may be performed by processor unit 204 using computer implemented instructions, which may be located in a memory, such as memory 206. These instructions are referred to as program code, computer usable program code, or computer-readable program code that may be read and executed by a processor in processor unit 204. The program code in the different embodiments may be embodied on different physical or tangible computer-readable media, such as memory 206 or persistent storage 208.")
The remainder of claim 11 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3, 8, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Bakis (US 2020/0042642 A1) as applied to claim 1 above, and further in view of Lee (“Processing SPARQL queries with regular expressions in RDF databases”, 2011).
Regarding claim 3, the rejection of claim 1 is incorporated herein. Bakis teaches
plurality of insurance rules (See explanation of claim 1. Note that the insurance rules are represented using triples.)
user question (See explanation of claim 1.)
Bakis does not appear to explicitly teach
converting texts of the [triples] into [triple] character strings
obtain the match result by invoking a regular expression matching function re.match, using [triple] character strings as a matching pattern, to match against the [query]
However, Lee—directed to analogous art—teaches
converting texts of the [triples] into [triple] character strings (Page 8 states "The GetNext() function of REGSCAN operator calls the GetNext() function of the root operator of its subplan to get the candidate triple IDs and then converts each triple ID into three string IDs using the dictionary built in the index building algorithm. For example, REGSCAN operator in Figure 4(a) gets the triple ID by calling the GetNext() function of the root operator (IDXAND) of the regular expression sub-plan. Then, each triple ID is converted into the string IDs, and REGSCAN returns these string IDs to the FILTER operator to verify the result.")
obtain the match result by invoking a regular expression matching function re.match, using [triple] character strings as a matching pattern, to match against the [query] (Page 6 states "To support regular expression queries, we develop a new operator, called REGSCAN, and adapt it to the query processing engine. For a triple pattern matching with a regular expression in a SPARQL query, the REGSCAN operator finds candidate triples which can be matched with that pattern in a database." REGSCAN is interpreted as the regular expression matching function re.match. Page 8 states "The GetNext() function of REGSCAN operator calls the GetNext() function of the root operator of its subplan to get the candidate triple IDs and then converts each triple ID into three string IDs using the dictionary built in the index building algorithm.")
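The claimed matching step can be sketched as follows. This is an illustrative reading of the limitation only: triples are flattened into character strings, and each string is used as the pattern argument to Python's re.match against the user query. The triples and query below are hypothetical:

```python
import re

# Hypothetical sketch of the claimed step: each triple's text is converted
# into a character string, which serves as a regular-expression pattern
# passed to re.match against the user query. (Lee's REGSCAN operator plays
# the analogous pattern-matching role for SPARQL triple patterns.)
triples = [
    ("auto insurance", "offers", "mature discounts"),
    ("boat insurance", "offers", "new customer discounts"),
]

def match_rule(query):
    for triple in triples:
        # Convert the triple's text into a single pattern string.
        pattern = ".*".join(re.escape(part) for part in triple)
        if re.match(pattern, query):  # re.match anchors at the query start
            return triple
    return None

print(match_rule("auto insurance offers mature discounts for drivers over 55"))
```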
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Bakis and Lee because, as stated by Lee on page 2, "Moreover, since users usually do not know the exact matching values of an RDF triple pattern, this example presents a common kind of request over RDF data and thus shows the necessity of supporting regular expression processing in SPARQL. It therefore motivates us to study the regular expression processing in RDF systems which, to the best of our knowledge, has not been efficiently supported by any of the existing RDF systems."
Claims 8 and 13 recite substantially similar subject matter to claim 3 and are rejected with the same rationale, mutatis mutandis.
Claims 4, 9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bakis (US 2020/0042642 A1) as applied to claim 1 above, and further in view of Sun (“Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text”, 2018).
Regarding claim 4, the rejection of claim 1 is incorporated herein. Bakis teaches
obtaining the match result by determining whether the user question matches each of the insurance rules through a preset question-answer model, ([0074] states "In preferred embodiments, a semantic matching function is employed in the Implicit Dialog system is based on Deep Learning process, where the words or phrases (entities and relationships) from user natural language input are mapped to word embedding vectors of searchable variables in both the knowledge graph database and the relational database in a low-dimensional space." [0074] further states "As used in the disclosure, a low-dimensional space uses word vectors in neural networks for distributed word representations. This is also known as a word embedding model, e.g. word2vec, Glove. Using such neural networks, the system can represent each word by a set of numbers." One of ordinary skill in the art would recognize word2vec and Glove as pre-trained word embedding models, and they are therefore interpreted as the preset question-answer model.)
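The embedding-based semantic matching described in Bakis [0074] can be sketched, for illustration only, as cosine similarity between averaged word vectors. The tiny hand-made vectors below stand in for pre-trained word2vec/GloVe embeddings and are hypothetical:

```python
import math

# Hypothetical sketch of semantic matching in the spirit of Bakis [0074]:
# words are mapped to embedding vectors (word2vec/GloVe style), and a
# question is matched to a rule by cosine similarity of averaged vectors.
# The tiny hand-made vectors below are illustrative only.
embeddings = {
    "auto":      [0.90, 0.10, 0.00],
    "car":       [0.85, 0.15, 0.00],
    "insurance": [0.10, 0.90, 0.10],
    "discount":  [0.00, 0.20, 0.90],
}

def sentence_vector(words):
    known = [w for w in words if w in embeddings]
    dims = len(next(iter(embeddings.values())))
    total = [0.0] * dims
    for w in known:
        for i, x in enumerate(embeddings[w]):
            total[i] += x
    return [x / len(known) for x in total]  # average the word vectors

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

q = sentence_vector(["car", "insurance"])   # user question phrase
r = sentence_vector(["auto", "insurance"])  # rule phrase
print(round(cosine(q, r), 3))  # close to 1.0 for semantically similar phrases
```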
Bakis does not appear to explicitly teach
wherein the preset question-answer model is configured to perform binary classification determining on both the user question and each of the insurance rules.
However, Sun—directed to analogous art—teaches
wherein the preset question-answer model is configured to perform binary classification determining on both the user question and each of the insurance rules. (Page 3 states “The question q and its answers a_q induce a labeling of the nodes in V_q: we let y_v = 1 if v ∈ a_q and y_v = 0 otherwise for all v ∈ V_q. The task of QA then reduces to performing binary classification over the nodes of the graph G_q.” As the graph is composed of triples, matching the question to the graph is matching the question to the rules. Page 5 states "The final representations h_v^L ∈ R^n are used for binary classification to select the answers:".)
Bakis teaches the question-answer model and question-rule matching, and Sun teaches the use of binary classification for matching. One of ordinary skill could have combined the elements as claimed by known methods, as one could substitute the calculation of matching using the pre-trained embeddings for the binary classification calculation of matching of Sun. In combination, each element performs the same function as it does separately; the pre-trained word embeddings model provides the embeddings, and the binary classification provides the result of matching. One of ordinary skill in the art would have recognized that the results of the combination were predictable: receiving a classification of match or no match for the question and the rule. This would allow for several rules to match. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Bakis and Sun.
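The combination the rejection describes can be sketched, for illustration only, as a binary classifier that labels each (question, rule) pair as match (1) or no match (0), so that several rules may match a single question. The threshold rule below stands in for Sun's learned node classifier, and all scores and rule names are hypothetical:

```python
# Hypothetical sketch of the asserted combination: a binary classifier
# labels each (question, rule) pair as match (1) or no match (0), which
# permits several rules to match one question. The simple threshold here
# stands in for Sun's learned classifier and is illustrative only.
def classify(score, threshold=0.5):
    """Binary classification: 1 = the rule matches the question."""
    return 1 if score >= threshold else 0

# Assume match scores have already been computed for one user question
# (e.g. from pre-trained word embeddings, per Bakis [0074]).
scores = {
    "mature_discounts_rule": 0.91,
    "new_customer_rule":     0.64,
    "boat_insurance_rule":   0.12,
}

matches = [rule for rule, s in scores.items() if classify(s) == 1]
print(matches)  # more than one rule can be classified as a match
```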
Claims 9 and 14 recite substantially similar subject matter to claim 4 and are rejected with the same rationale, mutatis mutandis.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA THUY PHAM whose telephone number is (571)272-2605. The examiner can normally be reached Monday - Friday, 9:00 A.M. - 5:00 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.T.P./Examiner, Art Unit 2121
/Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121