Prosecution Insights
Last updated: April 19, 2026
Application No. 17/548,117

COGNITIVE PLATFORM FOR KNOWLEDGE EXTRACTION FROM HETEROGENOUS DATA SOURCES AND THE METHOD THEREOF

Non-Final Office Action: §101, §103
Filed: Dec 10, 2021
Examiner: DEVORE, CHRISTOPHER DILLON
Art Unit: 2129
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Infosys Limited
OA Round: 3 (Non-Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
Time to Grant: 4y 1m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 50% (5 granted / 10 resolved; -5.0% vs TC avg)
Interview Lift: +41.7% (strong)
Avg Prosecution: 4y 1m
Currently Pending: 33
Total Applications: 43 (across all art units)

Statute-Specific Performance

§101: 30.1% (-9.9% vs TC avg)
§103: 39.0% (-1.0% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 10 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/03/2025 has been entered.

Response to Arguments

Remarks pages 9-11, Applicant contends: A judicial exception has not been correctly identified.

Response: The rejections under §101 identify the abstract ideas by noting, under each limitation, the interpretation applied as well as the supporting MPEP sections. To assist in understanding, more detailed descriptions are given below.

"synthesizing, by the computing device, a first set of entities based on the extracted raw data"

Under BRI (MPEP 2111), this claim limitation is seen as indicating that data is read in order to interpret something, such as entities, from it. Reading and understanding data is a skill a person of ordinary skill in the art holds. Figuring out or creating an "entity" based on data is a broad statement that permits the interpretation of a person understanding and noting what is being read, such as a person identifying that "Barack Obama" refers to a president when reading the sentence "President Barack Obama gave the soldier a medal." No limitation appears to indicate that this step is incapable of being performed by a person of ordinary skill in the art using pen and paper.
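The kind of entity synthesis discussed above, reading raw text and noting the entities it mentions, can be sketched with a toy rule-based extractor. This is a hypothetical illustration only (the gazetteer and labels are invented); it is not the claimed invention's implementation:

```python
# Illustrative sketch: a toy rule-based entity synthesizer.
# The gazetteer entries and labels below are invented for this example.
import re

GAZETTEER = {
    "Barack Obama": "PERSON (president)",
    "soldier": "PERSON (role)",
    "medal": "OBJECT",
}

def synthesize_entities(raw_text: str) -> list[tuple[str, str]]:
    """Scan raw text and return (entity, label) pairs found in the gazetteer."""
    entities = []
    for surface, label in GAZETTEER.items():
        if re.search(re.escape(surface), raw_text):
            entities.append((surface, label))
    return entities

print(synthesize_entities("President Barack Obama gave the soldier a medal."))
```

The point of the sketch mirrors the examiner's framing: given readable text, noting which entities appear is a lookup-and-label exercise.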
The notation that the steps are performed by a computing device is interpreted as a machine performing the abstract idea, which does not prevent the interpretation of an abstract idea. MPEP 2106.04(a)(2)(3): "An example of a case identifying a mental process performed in a computer environment as an abstract idea is Symantec Corp., 838 F.3d at 1316-18, 120 USPQ2d at 1360. In this case, the Federal Circuit relied upon the specification when explaining that the claimed electronic post office, which recited limitations describing how the system would receive, screen and distribute email on a computer network, was analogous to how a person decides whether to read or dispose of a particular piece of mail and that 'with the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper'."

Aspects as to how the extraction in "extracting, by the computing device, one or more concepts and one or more connectors..." is an abstract idea are noted in the previous Office action's response to arguments, addressing the argument that "It is practically impossible for human mind to extract data from heterogenous data sources..."

To summarize some of the other limitations: many of the limitations recite aspects relating to understanding and interpreting data. Understanding and interpreting data is a skill one of ordinary skill in the art is capable of performing mentally or with pen and paper. The data being manipulated within the invention does not appear to be data that a person could not practically understand, as figure 4 of the invention shows that both a text and an image variation of a knowledge model exist that are readable and understandable.

Remarks pages 11-13, Applicant contends: The claims do not recite a judicial exception.

Response: The applicant notes that aspects of the limitations appear to resemble steps from SRI International, Inc. v. Cisco Systems, Inc.,
930 F.3d 1295, 1304 (Fed. Cir. 2019). However, network traffic and network monitors are not seen as the same type of data or structures as those analyzed here; thus the limitations are not seen as implicating the reasoning of SRI International. Aspects of that case relate to a type of information that is likely not practical for a human to interpret (network packets) and to the use of a possibly particular machine (network monitors). The current invention is directed to information that is practically interpretable by a person (as shown by figure 4 of the current application) and utilizes a generic computer (a computing device). As a result, the interpretation of a mental process seems sensible to apply to the current limitations.

MPEP 2106.04(Section 3): "The courts consider a mental process (thinking) that 'can be performed in the human mind, or by a human using a pen and paper' to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, 'methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas the "basic tools of scientific and technological work" that are open to all.' 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ('[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work' (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same)."

Remarks pages 13-15, Applicant contends: The claims are integrated into a practical application of any alleged judicial exception.
Response: The section of the MPEP noted by the applicant (MPEP 2106.04(d)(1)) requires that the claim "apply or use the judicial exception in a meaningful way beyond generally linking…". The current application is not seen as applying or using the judicial exceptions within the claim limitations so as to integrate them into a practical application. The current limitations manipulate data, but a use or application of the data is not shown anywhere in the claims. Claim 1 contains a limitation of extracting data, but as noted in that rejection, extraction is considered a well-understood, routine, conventional activity and thus does not integrate the claim into a practical application. Adding limitations that give a particular use or application of the data created by the current application could be a possible way to integrate into a practical application. A specific recommendation on possible uses of the data is not given, but an example would be how the applicant or a client would use the knowledge model created by the current application. An important note is that elements that could be an application of the data or abstract ideas of the current application must amount to more than a mere "apply it" (MPEP 2106.05(f)): "The Supreme Court has identified additional elements as mere instructions to apply an exception in several cases. For instance, in Mayo, the Supreme Court concluded that a step of determining thiopurine metabolite levels in patients' blood did not amount to significantly more than the recited laws of nature, because this additional element simply instructed doctors to apply the laws by measuring the metabolites in any way the doctors (or medical laboratories) chose to use. 566 U.S. at 79, 101 USPQ2d at 1968. In Alice Corp., the claim recited the concept of intermediated settlement as performed by a generic computer. The Court found that the recitation of the computer in the claim amounted to mere instructions to apply the abstract idea on a generic computer. 573 U.S.
at 225-26, 110 USPQ2d at 1984. The Supreme Court also discussed this concept in an earlier case, Gottschalk v. Benson, 409 U.S. 63, 70, 175 USPQ 673, 676 (1972), where the claim recited a process for converting binary-coded-decimal (BCD) numerals into pure binary numbers. The Court found that the claimed process had no meaningful practical application except in connection with a computer. Benson, 409 U.S. at 71-72, 175 USPQ at 676. The claim simply stated a judicial exception (e.g., law of nature or abstract idea) while effectively adding words that 'apply it' in a computer."

The previous response to arguments noted that abstract ideas cannot themselves supply the integration into a practical application needed to satisfy §101 (MPEP 2106.05(a)).

Remarks pages 15-18, Applicant contends: The claims recite elements constituting "significantly more" than any abstract idea.

Response: The elements of the current application, as argued earlier in this response to arguments, involve abstract ideas. Abstract ideas are not able to supply the integration into a practical application needed to satisfy §101 (MPEP 2106.05(a)), which was also noted in the previous Office action's response to arguments. In regards to the newer argument that the Office supposedly must show how the elements are conventional to perform, such a requirement is not understood. Claim 1 recites one indication of a well-understood, routine, conventional activity, extracting data, which is supported by a part of the specification as the reasoning (see MPEP 2106.05(d), example v in computer functions). The abstract ideas were argued earlier in this response to be practical to perform in the human mind (noted in case the conventionality argument was intended to relate to the abstract ideas). Other non-abstract elements, such as those addressed in Step 2A Prong 2 and Step 2B, have an MPEP section cited for support in the corresponding §101 rejection. The current application does not appear to provide a "discrete implementation" of the abstract ideas.
As noted earlier in the responses to arguments, the claims do not appear to give a use or application of the data manipulated in the claims. The claims also appear to utilize only a generic computer for performing the steps (MPEP 2106.04(a)(2)(3)). The data was also argued earlier to be shown, by figure 4 of the current application representing the knowledge model as a readable graph or a JSON file, to be practically readable and understandable by a human. As a result, the current limitations are not seen as providing a "discrete implementation". The arguments from the applicant are not seen as convincing; thus the §101 rejections are maintained.

Remarks pages 18-20, Applicant contends: Tung does not disclose "domain values associated with the set of concepts and the set of connectors". Tung does not disclose the amended limitation.

Response: Tung 0023 notes, as presented in previous Office actions: "The KDMS [based on a pre-existing knowledge model] 1 maintains a record of expected input and output types, in terms of concepts and relationships [wherein the pre-existing knowledge model comprises a set of concepts, a set of connectors, and domain values associated with the set of concepts and the set of connectors], for each data processing pipeline." When considering the definition of a domain value, which is indicated by the current application to be an instance of a concept or connector ([Current Application 0014]: "Domain - Domain may be a collection of possible values that belong to a concept or a connector. Size of the domain will depend on the kind of the concept or a connector. Some domains are large (like customer identifiers) and some small (like color). Each possible value in a domain is an instance of the concept or the connector. For instance large, medium, small maybe the domain for the concept 'size'; Adam, Smith, Sriram, or employee numbers maybe the domain for the concept 'customer'."), Tung is seen as teaching domain values.
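The specification's notion of a knowledge model (paragraph [0014] quoted above) can be sketched as a simple data structure. The concept names and domain values below are taken from that paragraph; the class itself and the connector example are hypothetical illustrations, not the application's actual implementation:

```python
# Hypothetical sketch of a pre-existing knowledge model per [0014]:
# a set of concepts, a set of connectors, and the domain values
# (possible instances) associated with each.
from dataclasses import dataclass, field

@dataclass
class KnowledgeModel:
    # Maps each concept name to its domain (collection of possible values).
    concepts: dict[str, set[str]] = field(default_factory=dict)
    # Maps each connector name to its domain.
    connectors: dict[str, set[str]] = field(default_factory=dict)

    def domain_of(self, name: str) -> set[str]:
        """Return the domain values for a concept or connector (empty if unknown)."""
        return self.concepts.get(name) or self.connectors.get(name, set())

model = KnowledgeModel(
    concepts={
        "size": {"large", "medium", "small"},     # a small domain
        "customer": {"Adam", "Smith", "Sriram"},  # a larger domain
    },
    connectors={"works_for": {"employs", "employed by"}},  # invented example
)
print(sorted(model.domain_of("size")))
```

Note how each value in a domain is an instance of its concept, matching the paragraph's definition.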
Tung is noted as teaching concepts and relationships (which can be considered a form of connector). Since domain values are defined as instances of concepts and connectors, Tung's teaching of a record of concepts and relationships is considered to show instances of concepts and connectors, especially when considered together with Figure 2 of Tung, which shows, for example, a "President" being the child of "Politician". In regards to the argument that Tung does not teach the newly amended limitations, Applicant's arguments with respect to claims 1, 6, 11, 16, 17, and 18 have been considered but are moot because the new ground of rejection contains elements that have not been previously examined and does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. As Applicant's arguments in regards to the §102/§103 rejections are not convincing, the §102/§103 rejections are maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-6, 8-11, and 13-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed towards an abstract idea without significantly more.

In regards to Claim 1:

Step 1: Is the claim directed towards a process, machine, manufacture, or composition of matter? Yes, it is directed towards a method, so a process.

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes, the claim does recite an abstract idea.
Claim 1 recites the following abstract ideas:

"synthesizing, by the computing device, a first set of entities based on the extracted raw data"
This limitation is directed towards the abstract idea of a mental process, or a concept performed in the human mind, including observation, evaluation, judgement or opinion (see MPEP 2106.04(a)(2) subsection 3). Here it is seen as evaluation.

"extracting, by the computing device, one or more concepts and one or more connectors by classifying each entity of the synthesized first set of entities as a concept or a connector, based on a pre-existing knowledge model, wherein the pre-existing knowledge model comprises a set of concepts, a set of connectors, and domain values associated with the set of concepts and the set of connectors"
This limitation is likewise directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3).

"identifying, by the computing device, relations between the extracted one or more concepts and the one or more connectors, based on the first data source and the pre-existing knowledge model"
This limitation is likewise directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3).

"generating, by the computing device, a first knowledge structure based on the identified relations"
This limitation is likewise directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3).
"extracting, by the computing device, a second set of entities from a second data source"
This limitation is likewise directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3).

"mapping the extracted second set of entities to the extracted one or more concepts, the extracted one or more connectors, and the identified relations, based on the generated first knowledge structure, wherein the second data source is associated with the first data source"
This limitation is likewise directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3).

"converting, by the computing device, the mapping of the extracted second set of entities into one or more data structures"
This limitation is likewise directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3).

"generating, by the computing device, a second knowledge structure from the converted one or more data structures"
This limitation is likewise directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3).

"updating, by the computing device, a knowledge graph store based on the generated second knowledge structure"
This limitation is likewise directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3).
"the mapping includes calculating respective distances between each of the second set of entities and one of the extracted one or more concepts or one of the extracted one or more connectors"
This limitation is likewise directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3). This interpretation is made with the description of distance given in [Current Application 0038]. The indication that some form of "vector similarities, rule matching logic or any other appropriate technology" is to be followed is broad and enables the interpretation of using a person's mind to determine distance. And if a knowledge model is used to help determine the distance, that would still be an abstract idea, as a knowledge model is shown to be human readable in figure 4, where calculating or evaluating the distance from reading the knowledge model is a process a human could perform. ([Current Application 0038]: "A pre-existing knowledge model (208) may be used to classify select entities from the synthesized data as concepts or connectors. The knowledge model may have pre-built set of concepts, connectors along with a domain for each of those concepts and connectors. To classify entities into concepts and connectors, techniques such as vector similarities, rule matching logic or any other appropriate technology may be applied. It calculates the distance between the entity and the nearest concept (or connector) either directly or through one of its domain values. For instance, a knowledge model may identify a concept called size through its domain values like Small, Medium, Large etc.
The concept classifier could map an entity value 'extra-large' after synthesis to the concept 'size' after comparing it with the existing domain values in the pre-existing knowledge models.")

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception? No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.

Claim 1 recites the following additional elements:

"extracting, by a computing device, raw data from a first data source"
This limitation is directed towards the insignificant extra-solution activity of mere data gathering (see MPEP § 2106.05(g)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. The extraction limitation above is a well-understood, routine, conventional activity of extracting data (see MPEP 2106.05(d), example v in computer functions).

In regards to Claim 3:

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception? No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.
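The domain-value distance classification described in paragraph [0038] above (mapping 'extra-large' to the concept 'size') can be sketched as follows. The similarity measure here (difflib's string ratio) is an invented stand-in for the "vector similarities, rule matching logic or any other appropriate technology" the specification leaves open, and the toy knowledge model is hypothetical:

```python
# Hypothetical sketch of classifying an entity to its nearest concept
# via domain values, per [0038]. SequenceMatcher.ratio() stands in for
# the unspecified similarity/distance technique.
from difflib import SequenceMatcher

KNOWLEDGE_MODEL = {
    # concept -> domain values (invented for illustration)
    "size": ["small", "medium", "large"],
    "color": ["red", "green", "blue"],
}

def similarity(a: str, b: str) -> float:
    """String similarity in [0, 1]; higher means closer (smaller distance)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def classify(entity: str) -> tuple[str, float]:
    """Return the concept whose nearest domain value is closest to the entity."""
    best_concept, best_score = "", 0.0
    for concept, domain in KNOWLEDGE_MODEL.items():
        score = max(similarity(entity, value) for value in domain)
        if score > best_score:
            best_concept, best_score = concept, score
    return best_concept, best_score

concept, score = classify("extra-large")
print(concept)  # "size", matched via the domain value "large"
```

In a real system the similarity function would likely operate on embedding vectors rather than raw strings, but the concept-via-domain-value lookup is the same shape.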
Claim 3 recites the following additional elements:

"wherein the pre-existing knowledge model is augmented with the generated first knowledge structure"
At a high level of generality, this is an activity of using the pre-existing knowledge model and the first knowledge structure as an "apply it" use (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. At said high level of generality, augmenting the pre-existing knowledge model with the first knowledge structure does not appear to amount to more than a recitation of the words "apply it".

In regards to Claim 4:

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes, the claim does recite an abstract idea.

Claim 4 recites the following abstract ideas:

"identifying, by the computing device, a set of values related to the extracted one or more concepts and the one or more connectors from the first data source"
This limitation is directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3).

In regards to Claim 5:

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes, the claim does recite an abstract idea.
Claim 5 recites the following abstract ideas:

"removing, by the computing device, noise from the first data source for extracting the one or more concepts and the one or more connectors"
This limitation is directed towards the abstract idea of a mental process, seen here as evaluation (MPEP 2106.04(a)(2) subsection 3).

In regards to Claim 6:

Step 1: Is the claim directed towards a process, machine, manufacture, or composition of matter? Yes, it is directed towards a system, so a machine.

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes, the claim does recite an abstract idea. Claim 6 recites the same abstract ideas as analogous claim 1.

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception? No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.

Claim 6 recites the following additional elements beyond what is noted in analogous claim 1:

"A system for knowledge extraction from heterogeneous data sources comprising a processor and a memory coupled to the processor, wherein the processor is configured to execute instructions stored in the memory to"
At a high level of generality, this is an activity of using a processor and a memory as an "apply it" use (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself.
At said high level of generality, the claim 6 additional elements of a processor and a memory appear to be an implementation of the abstract idea on a computer, merely using a computer as a tool to perform the abstract idea.

In regards to Claim 8: Claim 8 is analogous to claim 3 and thus contains the same 101 rejections.

In regards to Claim 9: Claim 9 is analogous to claim 4 and thus contains the same 101 rejections.

In regards to Claim 10: Claim 10 is analogous to claim 5 and thus contains the same 101 rejections.

In regards to Claim 11:

Step 1: Is the claim directed towards a process, machine, manufacture, or composition of matter? Yes, it is directed towards a medium, so a manufacture or product.

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes, the claim does recite an abstract idea. Claim 11 recites the same abstract ideas noted in analogous claim 1.

Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception? No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.

Claim 11 recites the following additional elements beyond what is noted in analogous claim 1:

"A non-transitory computer readable medium with instructions stored thereon, that when executed by a processor, cause the processor to perform operations"
At a high level of generality, this is an activity of using a processor and a computer readable medium as an "apply it" use (see MPEP 2106.05(f)).
Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. At said high level of generality, a processor and a computer readable medium appear to be an implementation of the abstract idea on a computer, merely using a computer as a tool to perform the abstract idea.

In regards to Claim 13: Claim 13 is analogous to claim 3 and thus contains the same 101 rejections.

In regards to Claim 14: Claim 14 is analogous to claim 4 and thus contains the same 101 rejections.

In regards to Claim 15: Claim 15 is analogous to claim 5 and thus contains the same 101 rejections.

In regards to Claim 16:

Step 2A Prong 1: Does the claim recite a law of nature, a natural phenomenon, or an abstract idea? Yes, the claim does recite an abstract idea.

Claim 16 recites the following abstract ideas:

"the one or more data structures include one or more tuples"
This limitation is directed towards a continuation of the abstract idea of a mental process (MPEP 2106.04(a)(2) subsection 3). Noting that the data can take the form of a tuple does not change or alter whether the limitations are directed towards an abstract idea.
Step 2A Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception? No, the claim does not recite any additional elements that would integrate the abstract idea into a practical application.

Claim 16 recites the following additional elements:

"the one or more data structures include one or more tuples; and updating the knowledge graph store includes at least one of inserting the one or more tuples as one or more nodes or one or more edges in a graph store or updating one or more nodes or one or more edges in the graph store to correspond to the one or more tuples"
At a high level of generality, this is an activity of updating the knowledge graph store as an "apply it" use (see MPEP 2106.05(f)).

Step 2B: Does the claim as a whole amount to significantly more than the judicial exception? No, the claim as a whole does not amount to significantly more than the judicial exception. All elements of the claim, viewed individually or holistically, do not provide an inventive concept or otherwise significantly more than the abstract idea itself. At said high level of generality, inserting tuples, or updating one or more nodes or one or more edges using tuples, does not appear to amount to more than a recitation of the words "apply it".

In regards to Claim 17: Claim 17 is analogous to claim 16 and thus contains the same 101 rejections.
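The tuple-to-graph update recited in claim 16 (inserting tuples as nodes or edges, or updating existing ones) can be sketched as a simple upsert. The in-memory graph store, the tuple shape, and the example triples are hypothetical illustrations, not the claimed implementation:

```python
# Hypothetical sketch of updating a knowledge graph store from tuples:
# each (subject, connector, object) tuple is upserted as two nodes and
# one labeled edge, per the insert-or-update alternatives in claim 16.
class GraphStore:
    def __init__(self) -> None:
        self.nodes: set[str] = set()
        self.edges: dict[tuple[str, str], str] = {}  # (src, dst) -> connector label

    def upsert_tuple(self, subject: str, connector: str, obj: str) -> None:
        """Insert nodes and an edge for the tuple, or update the edge label."""
        self.nodes.update({subject, obj})
        self.edges[(subject, obj)] = connector

store = GraphStore()
store.upsert_tuple("Barack Obama", "instance_of", "President")
store.upsert_tuple("President", "child_of", "Politician")  # cf. Tung Fig. 2
print(sorted(store.nodes))
```

Re-upserting a tuple with the same endpoints simply relabels the edge, which covers the "updating one or more nodes or one or more edges" alternative.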
In regards to Claim 18: Claim 18 is analogous to claim 16 and thus contains the same 101 rejections.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-6, 8-11, and 13-18 are rejected under 35 U.S.C. 103 as being unpatentable over Tung et al. (US 20200012738 A1), referred to as Tung in this document, in combination with Paulheim ("Knowledge graph refinement: A survey of approaches and evaluation methods"), referred to as Paulheim in this document.
Regarding Claim 1:

Tung teaches: A method for knowledge extraction from heterogeneous data sources, comprising, [Tung 0013]: “Second, the knowledge graph is hydrated with information by ingesting knowledge from multiple data sources and different knowledge extraction techniques [A method for knowledge extraction from heterogeneous data sources comprising] (e.g., natural language processing (NLP), schema mapping, computer visions, or the like) to create the vertices and edges in the knowledge graph. Each data source may create its own data processing pipeline for extracting data to include into the knowledge graph being constructed.”

extracting, by a computing device, raw data from a first data source [Tung 0020]: “Referring back to the processing engine 115 shown in FIG. 1, the processing engine 115 is configured to extract data from different data sources in response to a received information query. For example, the processing engine 115 may apply NLP techniques on data to obtain an intent, sentiment, and/or nuances and context of spoken sentences of data to more accurately extract pertinent data. The processing engine 115 may further apply structured data denormalization and normalization techniques when parsing the data from the different data sources to better extract data [extracting, by a computing device, raw data from a first data source]. The processing engine 115 may further apply computer vision/object detection and recognition when parsing the data from the different data sources to better extract data.
The techniques applied by the processing engine 115 to extract data from the different data sources may be considered to be the first step in the knowledge graph hydration process.” synthesizing, by the computing device, a first set of entities based on the extracted raw data [Tung 0018]: “The processing layer 110 also includes a pipeline repository 114 that stores data extraction rules, techniques, and protocols for extracting data from different data sources. The pipeline repository 114 stores a repository of instruction code and instruction protocols that can be used to extract entities [synthesizing, by the computing device, a first set of entities based on the extracted raw data as getting entities from data using instructions is interpreted as synthesizing entities from the data under BRI] and other information from a corpus of data. A knowledge graph storage 131 stores initial graph ontologies (i.e., graph schemas), where a graph ontology may be domain specific or otherwise customized for particular applications. A graph ontology is a high-level schema defining how various entities are related. For example, the graph ontology includes the information for defining all entity types, edge types, and their hierarchies for a specific knowledge graph. A graph ontology may be referred to as a graph schema within this disclosure.” extracting, by the computing device, one or more concepts and one or more connectors by classifying each entity of the synthesized first set of entities as a concept or a connector, based on a pre-existing knowledge model, wherein the pre-existing knowledge model comprises a set of concepts, a set of connectors, and domain values associated with the set of concepts and the set of connectors A connector is interpreted as a relation supported by [Current Application 0013]: “Connector - Connectors imply how concepts may be related to one another. 
For instance, color of a sweatshirt, size of a dress, customer likes a dress etc.” The extracting, classifying, and such is taught by [Tung 0013]: “Second, the knowledge graph is hydrated with information by ingesting knowledge from multiple data sources and different knowledge extraction techniques (e.g., natural language processing (NLP), schema mapping, computer visions, or the like) to create the vertices and edges [extracting, by the computing device, one or more concepts and one or more connectors by classifying each entity of the synthesized first set of entities as a concept or a connector,] in the knowledge graph. Each data source may create its own data processing pipeline for extracting data to include into the knowledge graph being constructed.”. [Tung 0023]: “The integration layer 120 implements the orchestration process via orchestration circuitry 123 for determining whether intermediary result data stored on the staging repository can be further refined. The further refinement of intermediary results may include the addition of a next level (i.e., deeper) sub-concept to the knowledge graph being constructed by an additional processing step. The KDMS [based on a pre-existing knowledge model] 1 maintains a record of expected input and output types, in terms of concepts and relationships [wherein the pre-existing knowledge model comprises a set of concepts, a set of connectors, and domain values associated with the set of concepts and the set of connectors], for each data processing pipeline. These records may be stored as pipeline metadata 122. Records are created when new data processing pipelines are onboarded to the KDMS 1. 
A data processing pipeline may be defined by three components: 1) an input entity definition (type and attributes), 2) a logic entity definition (type and attributes), and 3) an output entity definition (type and attributes).” identifying, by the computing device, relations between the extracted one or more concepts and the one or more connectors, based on the first data source and the pre-existing knowledge model [Tung 0024]: “A new data processing pipeline may be onboarded to the KDMS 1 by, for example, accessing a new data source. A new data processing pipeline may also be onboarded to the KDMS 1 by, for example, identifying new entity relationships. [identifying, by the computing device, relations between the extracted one or more concepts and the one or more connectors, based on the first data source and the pre-existing knowledge model] FIG. 3 shows two exemplary data processing pipelines 300, as well as a graphical representation of the orchestration process. A first data processing pipeline A receives a newsfeed as an input, and outputs a Politician entity type as an output (which may, or may not, be empty). A second data processing pipeline B includes the Politician entity type as an input, and the President entity type as an output. Both the first data processing pipeline A and the second data processing pipeline B may be running NLP to extract information from their respective data sources. 
Orchestration is performed by routing the resulting output (Politician entity type) from the first data processing pipeline A to be invoked by the second data processing pipeline B to process the resulting output.” An example of a knowledge structure as presented by the current invention is [Current Invention 0045]: the entities are mapped to appropriate concepts and connectors such that the configured functionalities of concept mapper and the concept annotator are fulfilled, the mapped data along with the values maybe structured in a form of a database, or any other repository (207). Alternatively, it may be stored in a json file, csv or an xml file or any other file format with appropriate functionalities to store the concepts and connectors. This represents a knowledge structure. It may be structured in a form of a knowledge model (209).”. Tung paragraph 23 gives information to show that the KDMS can qualify as the knowledge structure. [Tung 0023]: “The KDMS 1 maintains a record of expected input and output types, in terms of concepts and relationships, for each data processing pipeline. These records may be stored as pipeline metadata 122. Records are created when new data processing pipelines are onboarded to the KDMS 1.” Current invention notes possible examples of concepts ([Current Invention 0012]: “Concept - concepts may be extracted from one or more structured or unstructured data or knowledge sources. Concepts may either imply a subject or the characteristics of a subject related to which a user needs a knowledge model or a graph. For instance, a subject called apparel can itself be a concept along with the various characteristics related to the apparel. Some examples are color, size, texture, category etc.”). A “Political entity type” [Tung 0024] is noted to be a concept. 
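As an editorial illustration of the classify-as-concept-or-connector limitation mapped above, classification against a pre-existing knowledge model can be sketched as a simple lookup. The model contents below are hypothetical, loosely based on the application's own apparel examples (color, size, “customer likes a dress”); the `classify_entities` function is invented for illustration and appears in neither the application nor Tung.

```python
# Hypothetical pre-existing knowledge model: a set of known concepts and
# a set of known connectors (relations between concepts).
knowledge_model = {
    "concepts": {"apparel", "color", "size", "customer"},
    "connectors": {"has_color", "has_size", "likes"},
}

def classify_entities(entities, model):
    """Split synthesized entities into concepts and connectors by
    checking each one against the pre-existing knowledge model."""
    concepts = [e for e in entities if e in model["concepts"]]
    connectors = [e for e in entities if e in model["connectors"]]
    return concepts, connectors

concepts, connectors = classify_entities(
    ["customer", "likes", "apparel", "has_color"], knowledge_model
)
# concepts   -> ["customer", "apparel"]
# connectors -> ["likes", "has_color"]
```

The sketch makes concrete why the limitation reads broadly: with the model given, classification reduces to membership tests over the model's concept and connector sets.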
generating, by the computing device, a first knowledge structure based on the identified relations [Tung 0021]: “The processing pipelines may be configured to be chained together to further refine the intermediary results to obtain a final output. [generating, by the computing device, a first knowledge structure based on the identified relations] The system described herein may not be tied to a specific data source or a specific type of data processing. Thus, the chaining of the processing pipelines is not static, but dynamic based on existing data and/or results of previous processes (i.e., the intermediary results).” extracting, by the computing device, a second set of entities from a second data source mapping the extracted second set of entities to the extracted one or more concepts, the extracted one or more connectors, and the identified relations, based on the generated first knowledge structure, wherein the second data source is associated with the first data source [Tung 0024]: “A new data processing pipeline may be onboarded to the KDMS 1 by, for example, accessing a new data source. A new data processing pipeline may also be onboarded to the KDMS 1 by, for example, identifying new entity relationships. FIG. 3 shows two exemplary data processing pipelines 300, as well as a graphical representation of the orchestration process. A first data processing pipeline A receives a newsfeed as an input, and outputs a Politician entity type as an output (which may, or may not, be empty). A second data processing pipeline B includes the Politician entity type as an input, and the President entity type as an output [mapping the extracted second set of entities to the extracted one or more concepts, the extracted one or more connectors, and the identified relations,]. 
Both the first data processing pipeline A and the second data processing pipeline B may be running NLP to extract information [extracting, by the computing device, a second set of entities from a second data source] from their respective data sources.” [Tung 0023]: “The integration layer 120 implements the orchestration process via orchestration circuitry 123 for determining whether intermediary result data stored on the staging repository can be further refined. [based on the generated first knowledge structure,] The further refinement of intermediary results may include the addition of a next level (i.e., deeper) sub-concept to the knowledge graph being constructed by an additional processing step. The KDMS 1 maintains a record of expected input and output types, in terms of concepts and relationships, for each data processing pipeline. These records may be stored as pipeline metadata 122.” [Tung Figure 3] helps show that aspects of the first knowledge structure are used as input for the extraction of entities. [Tung 0021]: “The processing pipelines may be configured to be chained together to further refine the intermediary results to obtain a final output. The system described herein may not be tied to a specific data source or a specific type of data processing. Thus, the chaining of the processing pipelines is not static, but dynamic based on existing data [wherein the second data source is associated with the first data source] and/or results of previous processes (i.e., the intermediary results).” The second data source being associated with the first data source is shown by the quote above, as the quote notes that pipelines are linked together based on existing data or previous processes (intermediary results).
If a link is created based on previous data or data from the other pipelines (which is the extracted data from other sources), then the sources must have some form of relation or association with each other that could be found. This means Tung shows extracting and identifying data between sources where the sources are associated with one another.

converting, by the computing device, the mapping of the extracted second set of entities into one or more data structures generating, by the computing device, a second knowledge structure from the converted one or more data structures [Tung 0022]: “Each intermediary result may be data that comprises a portion [converting, by the computing device, the mapping of the extracted second set of entities into one or more data structures] of the knowledge graph being constructed [generating, by the computing device, a second knowledge structure from the converted one or more data structures]. An intermediary result data stored on the staging repository may be further refined or ingested into the knowledge graph being constructed when the processing engine 115 determines further refinement is not needed.”

updating, by the computing device, a knowledge graph store based on the generated second knowledge structure [Tung 0023]: “The integration layer 120 implements the orchestration process via orchestration circuitry 123 for determining whether intermediary result data stored on the staging repository can be further refined. The further refinement of intermediary results may include the addition of a next level (i.e., deeper) sub-concept to the knowledge graph being constructed by an additional processing step. The KDMS 1 maintains a record [updating, by the computing device, a knowledge graph store based on the generated second knowledge structure] of expected input and output types, in terms of concepts and relationships, for each data processing pipeline. These records may be stored as pipeline metadata 122.
Records are created when new data processing pipelines are onboarded to the KDMS 1.” This notes that data from the processes is added to the records. Tung Figure 7 also notes storing knowledge graph data on part 708 for an alternative mapping of storing knowledge graphs. Further support of storage for knowledge graphs is in paragraph 37 of Tung noting “The KDMS 1 further includes the graph layer 130 comprised of the knowledge graph storage 131. The knowledge graph storage 131 stores entities (nodes), relationships (edges), and attributes (node/edge properties).” Tung does not explicitly teach: the mapping includes calculating respective distances between each of the second set of entities and one of the extracted one or more concepts or one of the extracted one or more connectors Paulheim teaches: the mapping includes calculating respective distances between each of the second set of entities and one of the extracted one or more concepts or one of the extracted one or more connectors [Paulheim 5.2.1 page 9 of pdf]: "Apriosio et al. [4] use types of entities in different DBpedia language editions (each of which can be understood as a knowledge graph connected to the others) as features for predicting missing types. The authors use a k-NN classifier with different distance measures [the mapping includes calculating respective distances between each of the second set of entities and one of the extracted one or more concepts or one of the extracted one or more connectors] (i.e., kernel functions), such as the overlap of two articles’ categories. In their setting, a combination of different distance measures is reported to provide the best results." One of ordinary skill in the art, prior to the effective filing date, would have been motivated to combine Tung and Paulheim. Tung and Paulheim are in the same field of endeavor of knowledge graphs or knowledge models. 
One of ordinary skill in the art would have been motivated to combine Tung and Paulheim to utilize a distance measure or calculation in order to increase coverage or information in a knowledge graph ([Paulheim 5 page 8 of pdf]: "Completion of knowledge graphs aims at increasing the coverage of a knowledge graph. Depending on the target information, methods for knowledge graph completion either predict missing entities, missing types for entities, and/or missing relations that hold between entities").

Regarding Claim 3:

The method of claim 1 is taught by Tung and Paulheim. Tung teaches: wherein the pre-existing knowledge model is augmented with the generated first knowledge structure [Tung 0022]: “Each intermediary result may be data that comprises a portion of the knowledge graph being constructed. An intermediary result data stored on the staging repository may be further refined or ingested into the knowledge graph being constructed [wherein the pre-existing knowledge model is augmented with the generated first knowledge structure] when the processing engine 115 determines further refinement is not needed.” Further support for this is given in [Tung 0023]: “The integration layer 120 implements the orchestration process via orchestration circuitry 123 for determining whether intermediary result data stored on the staging repository can be further refined. The further refinement of intermediary results may include the addition of a next level (i.e., deeper) sub-concept to the knowledge graph being constructed by an additional processing step. The KDMS 1 maintains a record of expected input and output types, in terms of concepts and relationships, for each data processing pipeline. These records may be stored as pipeline metadata 122. Records are created when new data processing pipelines are onboarded to the KDMS 1.” This notes that data from the intermediary results is added to the records of KDMS during the construction of a knowledge graph.
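Returning to the Paulheim passage cited in the claim 1 rejection: the k-NN technique with a distance measure that Paulheim attributes to Aprosio et al. can be sketched as below, as an editorial illustration. The entities, category sets, and the exact overlap-based distance formula are hypothetical stand-ins, not taken from Paulheim's survey.

```python
# Hypothetical sketch of k-NN type prediction under an overlap distance,
# the technique Paulheim is cited for; all data here is illustrative.
from collections import Counter

# Labeled entities: (category set, known type).
labeled = {
    "Angela Merkel": ({"politics", "germany", "chancellors"}, "Politician"),
    "Barack Obama": ({"politics", "usa", "presidents"}, "Politician"),
    "Mount Everest": ({"mountains", "nepal", "geography"}, "Mountain"),
}

def overlap_distance(cats_a, cats_b):
    """Distance = 1 - |overlap| / |smaller set| (an overlap-style kernel)."""
    if not cats_a or not cats_b:
        return 1.0
    return 1.0 - len(cats_a & cats_b) / min(len(cats_a), len(cats_b))

def predict_type(categories, k=2):
    """Predict a missing type by majority vote over the k nearest
    labeled entities under the overlap distance."""
    ranked = sorted(
        labeled.values(),
        key=lambda entry: overlap_distance(categories, entry[0]),
    )
    votes = Counter(entity_type for _, entity_type in ranked[:k])
    return votes.most_common(1)[0][0]

predicted = predict_type({"politics", "usa", "senators"})
# predicted == "Politician": both nearest neighbors under the overlap
# distance are typed Politician.
```

A majority vote over the k nearest labeled entities supplies the missing type, which is the "completion" use Paulheim describes and which the rejection offers as the motivation to combine.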
Regarding Claim 4: The method of claim 1 is taught by Tung and Paulheim. Tung teaches: further comprising identifying, by the computing device, a set of values related to the extracted one or more concepts and the one or more connectors from the first data source [Tung 0021]: “For an example of name entity extraction from a newspaper, in such context, politician entities that are extracted from the newspaper may be considered the intermediary result. Now some politician entities that are extracted may be further processed and classified as a specific politician, e.g., President. Then the President and the remaining set of Politicians from the intermediary results are considered to be the final results.” [Tung 0025]: “The integration layer 120 further implements resolution processing of the intermediary results via resolution circuitry 124. Resolution processing observes the intermediary results and attempts to resolve different expressions of a same entity with information obtained from external data sources [further comprising identifying, by the computing device, a set of values related to the extracted one or more concepts and the one or more connectors from the first data source] 126. For example, FIG. 4 illustrates an exemplary Entity 400 where three different expressions may be attempting to define the same Entity (44th President of the United States, President Obama, Obama). To resolve such situations where different expressions are found for a same real-world entity, the resolution circuitry 124 executes strategies with available contextual information to resolve the different expressions attributed to the same real-world entities.” [Current Invention 0014]: “Each possible value in a domain is an instance of the concept or the connector. 
For instance large, medium, small maybe the domain for the concept 'size'; Adam, Smith, Sriram, or employee numbers maybe the domain for the concept 'customer'.” The example idea of what a value is from Current Invention 0014 is present in Tung 0025. Tung 0025 shows that some of what Tung calls entities are considered values by Current Invention (ex. Obama is an instance of the concept 44th President of the United States). Regarding Claim 5: The method of claim 1 is taught by Tung and Paulheim. Tung teaches: further comprising removing, by the computing device, noise from the first data source for extracting the one or more concepts and the one or more connectors [Tung 0026]: “When the resolution processing fails to resolve the entities properly due to lack of available contextual information, the resolution circuitry 124 may perform analytical queries on the knowledge graph to generate a candidate pair of entities to be pruned [further comprising removing, by the computing device, noise from the first data source for extracting the one or more concepts and the one or more connectors] or merged, and calculate a similarity using common connected entities of the candidate entity pair.” Noise is noted by the current invention to be data that can be construed as ambiguous ([Current Invention 0040]: “In an embodiment, the unrequired data which could be a concept, connector or part of a concept or connector's domain which may have been extracted by the data extractor (201) or classified as an entity by the synthesizer (202) is flagged as a blacklisted word or noise (204). Noise may also include any ambiguous data, or any other unrequired text which is not relevant for the knowledge model or the knowledge graph. 
This data may be used to retrain the data extractor (201) component in the form of refinements through natural language or regular expressions and other related technologies.”), which is akin to data that is difficult to resolve or lacks context as stated in [Tung 0026].

Regarding Claim 6: Tung teaches: comprising a processor and a memory comprising instructions executable by the processor to cause the system to perform operations [Tung 0041]: “As just one example, the system circuitry 604 may include one or more instruction processor 618 and memory 620 [comprising a processor and a memory comprising instructions executable by the processor to cause the system to perform operations].” The rest of this claim is analogous to claim 1.

Regarding Claim 8: The cognitive platform of claim 7 is taught by Tung and Paulheim. This claim is analogous to claim 3.

Regarding Claim 9: The cognitive platform of claim 6 is taught by Tung and Paulheim. This claim is analogous to claim 4.

Regarding Claim 10: The cognitive platform of claim 6 is taught by Tung and Paulheim. This claim is analogous to claim 5.

Regarding Claim 11: Tung teaches: A non-transitory computer readable medium with instructions stored thereon that, when executed by a processor, cause the processor to perform operations [Tung Claim 18]: “A system comprising: a machine-readable medium, other than a transitory signal [A non-transitory computer readable medium]; and instructions stored on the machine-readable medium that, when executed by processing circuitry [with instructions stored thereon that, when executed by a processor, cause the processor to perform operations]” The rest of this claim is analogous to claim 1.

Regarding Claim 13: The cognitive platform of claim 12 is taught by Tung and Paulheim. This claim is analogous to claim 3.

Regarding Claim 14: The cognitive platform of claim 11 is taught by Tung and Paulheim. This claim is analogous to claim 4.
Regarding Claim 15: The cognitive platform of claim 11 is taught by Tung and Paulheim. This claim is analogous to claim 5.

Regarding Claim 16: The method of claim 1 is taught by Tung and Paulheim. Tung teaches: the one or more data structures include one or more tuples [Tung 0034]: “Where X and Y are concepts and ->α is the relationship, all defined in a schema. Tp represents the pattern tuple [the one or more data structures include one or more tuples] for X ->α Y. The relationships α’s strength is evaluated by the following two formulations…”

and updating the knowledge graph store includes at least one of inserting the one or more tuples as one or more nodes or one or more edges in a graph store or updating one or more nodes or one or more edges in the graph store to correspond to the one or more tuples. Updating is taught by the updating limitation at the end of claim 1 by Tung 0023 [and updating the knowledge graph store includes] and 0037. Tung 0034 shows that tuples [at least one of inserting the one or more tuples] are used, and thus updating using tuples is taught by the knowledge that a tuple can represent the elements being stored in the knowledge graph store, especially when read in the context of Tung 0037, which notes that a knowledge graph is composed of the wanted elements of edges and nodes [Tung 0037]: “The KDMS 1 further includes the graph layer 130 comprised of the knowledge graph storage 131. The knowledge graph storage 131 stores entities (nodes), relationships (edges), [as one or more nodes or one or more edges in a graph store or updating one or more nodes or one or more edges in the graph store to correspond to the one or more tuples] and attributes (node/edge properties).”

Regarding Claim 17: The system of claim 6 is taught by Tung and Paulheim. This claim is analogous to claim 16.

Regarding Claim 18: The computer readable medium of claim 11 is taught by Tung and Paulheim. This claim is analogous to claim 16.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. “Knowledge graph embedding with concepts” by Guan et al. presents a disclosure that notes connecting entities to concepts while noting that doing so can improve understanding of concepts that use the same word, such as “apple” being both a fruit and a company. Method and Apparatus for Generating Knowledge Graph by Kim et al. (US 20220156468 A1) notes the idea of removing noise in relation to concepts and relationships. US 20190087755 A1 by Hull et al. notes having a human in the loop to assist the knowledge graph creation. The specification of the current invention also notes this idea.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER D DEVORE whose telephone number is (703)756-1234. The examiner can normally be reached Monday-Friday 7:30 am - 5 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /C.D.D./Examiner, Art Unit 2129 /MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129

Prosecution Timeline

Dec 10, 2021
Application Filed
Jan 23, 2025
Non-Final Rejection — §101, §103
May 02, 2025
Response Filed
Jul 08, 2025
Final Rejection — §101, §103
Oct 03, 2025
Request for Continued Examination
Oct 14, 2025
Response after Non-Final Action
Dec 30, 2025
Non-Final Rejection — §101, §103
Feb 19, 2026
Applicant Interview (Telephonic)
Feb 19, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530603
OBTAINING AND UTILIZING FEEDBACK FOR AGENT-ASSIST SYSTEMS
2y 5m to grant Granted Jan 20, 2026
Patent 12505355
GENERAL FORM OF THE TREE ALTERNATING OPTIMIZATION (TAO) FOR LEARNING DECISION TREES
2y 5m to grant Granted Dec 23, 2025
Patent 12468978
Reinforcement Learning In A Processing Element Method And System Thereof
2y 5m to grant Granted Nov 11, 2025
Patent 12412069
COOKIE SPACE DOMAIN ADAPTATION FOR DEVICE ATTRIBUTE PREDICTION
2y 5m to grant Granted Sep 09, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
92%
With Interview (+41.7%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
