Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Petition to Revive
Applicant's petition to revive the application, which was held abandoned for failure to reply, is granted on 01/20/2026.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of the prior-filed application, Application No. 63/122529, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. The written description and drawing(s) of the provisional application do not adequately support and enable the subject matter claimed in the nonprovisional application that claims the benefit of the provisional application (see MPEP 211.05 I. A., Claiming the Benefit of Provisional Applications: Under 35 U.S.C. 119(e), the written description and drawing(s) (if any) of the provisional application must adequately support and enable the subject matter claimed in the nonprovisional application that claims the benefit of the provisional application. In New Railhead Mfg., L.L.C. v. Vermeer Mfg. Co., 298 F.3d 1290, 1294, 63 USPQ2d 1843, 1846 (Fed. Cir. 2002), the court held that for a nonprovisional application to be afforded the benefit date of the provisional application, "the specification of the provisional must ‘contain a written description of the invention and the manner and process of making and using it, in such full, clear, concise, and exact terms,’ 35 U.S.C. 112 ¶ 1, to enable an ordinarily skilled artisan to practice the invention claimed in the nonprovisional application."). Specifically, the provisional application's written description and drawings do not support the following claimed subject matter:
claim 1, a file or derivative of the file as input for a search for information comprises scientific information, engineering information or medical information and private information of the user; claim 6, “…the extraction of the data is performed by a data extraction engine hosted by a second server of the computer network.”; claim 10, “…wherein at least part of the data extracted from the file is translated, by the computing system of the computer network, from a first language to a second language before being used by the search engine to perform the search.”; claim 16, “…wherein the plurality of files comprise document files, data files, electronic copies of journal articles, electronic copies of patents and patent applications, image files, video files, audio files, and electronic copies of forms of data visualizations including data plots.”. All other claims 2-5, 7-9, 11-15, and 17-20 similarly lack support for their subject matter in the provisional application's specification and drawings.
Accordingly, claims 1-20 are not entitled to the benefit of the prior provisional application.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/01/2021 has been considered by the examiner.
Drawings
The drawings submitted on 12/01/2021 have been considered by the examiner.
Response to Amendment
Claims 1-5 and 7-17 are currently pending in the application. Among them, claim 1 is the sole independent claim and has been amended.
Response to Arguments
Applicant's arguments filed on 1/20/2026 have been fully considered but they are not persuasive. The examiner's responses to applicant's arguments follow:
Applicant Argument: Josephson fails to disclose both "a search engine hosted by a server," and "a data extraction engine hosted by a second server," as recited in claim 1. Therefore, Josephson does not anticipate claim 1 as amended with the features of claim 6 as originally filed (now canceled), and a prima facie case of unpatentability has not been established.
Examiner Response: The examiner respectfully disagrees with applicant's conclusory argument, which does not fully address the prior art's teachings. Josephson clearly teaches all the limitations of claim 1, including the argued limitations of “a search engine hosted by a server” and “a data extraction engine hosted by a second server.” Under the broadest reasonable interpretation (BRI), and per the definition provided by Google, a “server” is “a computer or computer program that manages access to a centralized resource or service in a network.” In the rejection set forth in the Office action, the examiner cited the prior art teaching of “a search engine hosted by a server of the computer network” with reference to Fig. 1A, mapping client device 104 as the server and server 102 as the search engine. However, the examiner made an inadvertent mistake in the Office action in mapping the limitation “a data extraction engine hosted by a second server” in the rejection of claim 6; the examiner intended to map “a data extraction engine hosted by a second server” with reference to Fig. 1A, with MRaaS module 126 as the second server and inference engine 132 as the data extraction engine.
The examiner's interpretation is clearly supported by Josephson's teaching of the claimed limitations in [0005] The MRaaS module can rely on an artificial intelligence (AI) system that employs ontologies, knowledge graphs, reasoning through use of an inference engine and an explainability module, and clinical trial data provided by external sources, e.g., clinical trial expert knowledge. More specifically, the MRaaS module can generate and provide reasoning and identified sources behind the response to the user using the inference engine and the explainability module. [0006] In some implementations, the computing system utilizes the MRaaS module and an NLP module to derive a response to the query. [0034] The information extracted from the NLP module is provided to the MRaaS module for querying and providing a response to the query. In particular, the MRaaS module can derive a response using various ontologies, knowledge graphs, and other historical medical data. Typically, the queries provided to the computing system are focused on complex medical terms that are relevant to a clinical study, a clinical procedure, or clinical research to name a few examples. The MRaaS module can provide a response to the query as well as data identifying how the MRaaS module derived at the response. [0041] During stage (B), the server 102 receives the query 106 from the client device 104. [0064] During stage (D), the server 102 communicates through an application programmable interface (API) 124 to provide data associated with the query 106 to the MRaaS module 126. The API 124 corresponds to a source code interface that enables a system, such as server 102, to access functions and other items from a software application. [0066] In response to the server 102 providing the inputs via the API 124 to the MRaaS module 126, the MRaaS module 126 can perform the process of generating a response to the query 106. [0067] During stage (E), the server 102 executes the functions of the MRaaS module 126.
The MRaaS module 126 can include one or more modules, such as ontology module 128, a knowledge graph module 130, an inference engine 132, and an explainability module 134. Each of the modules 128, 130, and 134 and engine 132 can communicate with one another. Additionally, each of the modules of the MRaaS module 126 may communicate with the various databases connected to the server 102, e.g., the knowledge graph database 110, the ontology knowledge base 112, the historical database 114, and the inference database 116. [0074] In response to the ontology module 128 identifying the ontologies that match to the contents of the query, e.g., entities 118, relationships 120, and profile 122, the ontology module can transmit the identified ontologies to the knowledge graph module 130. [0079] In response to identifying a knowledge graph, the knowledge graph module 130 can transmit data representing the knowledge graphs to the inference engine 132. In some implementations, the inference engine 132 can process the data in the received knowledge graph and generate results for the query.
Therefore, the examiner believes applicant's arguments are not persuasive, and the rejection of claim 1 is maintained with the further clarification of the prior art's teachings provided above.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-5 and 7-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Josephson et al. (US 2023/0017672 A1).
Regarding Claim 1, Josephson et al. teach: A computer network (Fig.1A system 100, or Fig. 4 computers connected locally or connected over a network), comprising: a plurality of computing devices ([0036] In some implementations, system 100 includes a client device 104, a server 102, a knowledge graph database 110, an ontology knowledge base 112, a historical database 114, and an inference database 116. The server 102 can include one or more computers connected locally or connected over a network.); and a search engine (Fig.1A, server 102) hosted by a server (Fig.1A, client device 104) of the computer network and configured to: receive, via the computer network, a file or a derivative of the file (query researching information or an answer to a particular question i.e. a complex medical topic) as input for a search for information stored in the computer network (knowledge graph database 110, the ontology knowledge base 112, the historical database 114, and the inference database 116) and outside of the computer network (access to websites and other cloud storage components) ([0038] During stage (A), a user 103 provides a query 106 to the server 102. The user 103 may correspond to a medical clinician, e.g., doctor, medical researcher, nurse, or other, which seeks an answer to a particular question. For example, the user 103 may be researching a complex medical topic and provide a query that recites…[0041] During stage (B), the server 102 receives the query 106 from the client device 104. [0049] During stage (C), the server 102 accesses and retrieves data from the knowledge graph database 110, the ontology knowledge base 112, the historical database 114, and the inference database 116. The server 102 can access each of these data stores, e.g., databases, warehouses, data links, and others, for providing additional information to the MRaaS module 126.
[0062] In some implementations, the historical database 114 can provide the server 102 access to external medical documents. Additionally, the historical database 114 can provide the server access to websites and other cloud storage components that enable the server 102 to search for various medical data, e.g., medical journals, texts, and other peer-reviewed documents.), wherein the derivative of the file comprises data extracted from the file, and wherein the extraction of data is performed by a data extraction engine (inference engine 132) hosted by a second server (MRaaS module 126) of the computer network ([0005] The MRaaS module can rely on an artificial intelligence (AI) system that employs ontologies, knowledge graphs, reasoning through use of an inference engine. [0034] In particular, the MRaaS module can derive a response using various ontologies, knowledge graphs, and other historical medical data. [0064] During stage (D), the server 102 communicates through an application programmable interface (API) 124 to provide data associated with the query 106 to the MRaaS module 126. [0066] In response to the server 102 providing the inputs via the API 124 to the MRaaS module 126, the MRaaS module 126 can perform the process of generating a response to the query 106. [0079] In response to identifying a knowledge graph, the knowledge graph module 130 can transmit data representing the knowledge graphs to the inference engine 132. In some implementations, the inference engine 132 can process the data in the received knowledge graph and generate results for the query.); and perform the search for information using information sources external to the computer network and internal to the computer network ([0036] The server 102 can include one or more computers connected locally or connected over a network. 
[0049] During stage (C), the server 102 accesses and retrieves data from the knowledge graph database 110, the ontology knowledge base 112, the historical database 114, and the inference database 116. The server 102 can access each of these data stores, e.g., databases, warehouses, data links, and others, for providing additional information to the MRaaS module 126. [0062] In some implementations, the historical database 114 can provide the server 102 access to external medical documents. Additionally, the historical database 114 can provide the server access to websites and other cloud storage components that enable the server 102 to search for various medical data, e.g., medical journals, texts, and other peer-reviewed documents. [0096] The server 102 can retrieve one or more ontologies from the ontology knowledge base 112 based on the determined characteristics of the user that transmitted the query. For example, the server 102 can retrieve local ontologies 145 on a similar network as the server 102 or retrieve other ontologies 146 on external networks over the Internet or another local network.), wherein information to be searched by the search engine comprises scientific information, engineering information, or medical information ([0033] In particular, a computing system can receive a query from a user or other structured and unstructured datasets, e.g., external medical databases, medical journals, textbooks, and other data sources, using a natural language processor (NLP) module and other deep learning algorithms. [0038] During stage (A), a user 103 provides a query 106 to the server 102. The user 103 may correspond to a medical clinician, e.g., doctor, medical researcher, nurse, or other, which seeks an answer to a particular question.
For example, the user 103 may be researching a complex medical topic and provide a query that recites…), wherein the information to be searched by the search engine comprises private information of a user using the computer network ([0039] In some implementations, the client device 104 transmits additional information to the server 102 in the query 106. The additional information can include information about the user, such as credentials of the user 103 for authorization to communicate with server 102, e.g., username and password, data identifying the client device 104, and a location of the client device 104, e.g., locational coordinates. The additional information can also include a job position of the user 103, e.g., the user's job title, a characterization of the user's job responsibilities, a job title at the particular location, or a job category. For example, the job position may correspond to a nurse, a researcher, a doctor, a secretary, or some other position.), and wherein the information to be searched by the search engine is information obtained from a plurality of files comprising scientific information, engineering information, or medical information ([0062] In some implementations, the historical database 114 can provide the server 102 access to external medical documents. The external medical documents can correspond to structured and unstructured data. The structured data can include various types of medical publications and health-related text, such as medical textbooks, online publications relating to healthcare, medical journals, electronic publications, medical treatises, web-based articles, medical websites, or various resources of information that can be formatted for data extraction and processing by a computer system, e.g., the server 102. The unstructured data may correspond to different datasets that relate to medical activities, patient medical records, or healthcare transactions.
Additionally, the historical database 114 can provide the server access to websites and other cloud storage components that enable the server 102 to search for various medical data, e.g., medical journals, texts, and other peer-reviewed documents.).
Regarding Claim 2, Josephson et al. teach: The computer network of claim 1, wherein the file is an electronic document file (See rejection of claim 1 and [0062]).
Regarding Claim 3. Josephson et al. teach: The computer network of claim 1, wherein the file is an electronic file outputted by an Internet of Things device (See rejection of claim 1, [0036], and [0079] In response to identifying a knowledge graph, the knowledge graph module 130 can transmit data representing the knowledge graphs to the inference engine 132. In some implementations, the inference engine 132 can process the data in the received knowledge graph and generate results for the query. [0096] For example, the server 102 can retrieve local ontologies 145 on a similar network as the server 102 or retrieve other ontologies 146 on external networks over the Internet or another local network. [0145] Computing device 400 and 450 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.).
Regarding Claim 4. Josephson et al. teach: The computer network of claim 1, wherein the file is outputted from a database or a database management system (MRaaS module 126) (See rejection of claim 1 and [0060] In some implementations, the historical database 114 includes one or more databases that store previously generated and accessed data for generating the response to the query. Generally, the historical database 114 stores previous queries, previous generated responses associated with the previous queries, previously determined entities and relationships associated with the previous queries, and other previously generated data. This can include corresponding ontologies, knowledge graphs, inference engines, previously generated explainability models, and paths produced by the previously generated explainability models, each of these associated with a previously received query and a generated response, to name a few examples. [0065] In some implementations, the server 102 can provide data associated with the query 106 to the MRaaS module 126 through the API 124. [0067] During stage (E), the server 102 executes the functions of the MRaaS module 126. The MRaaS module 126 can include one or more modules, such as ontology module 128, a knowledge graph module 130, an inference engine 132, and an explainability module 134. Each of the modules 128, 130, and 134 and engine 132 can communicate with one another. Additionally, each of the modules of the MRaaS module 126 may communicate with the various databases connected to the server 102, e.g., the knowledge graph database 110, the ontology knowledge base 112, the historical database 114, and the inference database 116.).
Regarding Claim 5. Josephson et al. teach: The computer network of claim 1, wherein the file is stored, by a computing system (MRaaS module 126) of the computer network, in memory of a cloud computing infrastructure prior to being inputted into the search engine (See rejection of claim 1 and [0060] In some implementations, the historical database 114 includes one or more databases that store previously generated and accessed data for generating the response to the query. [0062] In some implementations, the historical database 114 can provide the server 102 access to external medical documents. Additionally, the historical database 114 can provide the server access to websites and other cloud storage components that enable the server 102 to search for various medical data, e.g., medical journals, texts, and other peer-reviewed documents. [0065] In some implementations, the server 102 can provide data associated with the query 106 to the MRaaS module 126 through the API 124. [0067] During stage (E), the server 102 executes the functions of the MRaaS module 126. The MRaaS module 126 can include one or more modules, such as ontology module 128, a knowledge graph module 130, an inference engine 132, and an explainability module 134. Each of the modules 128, 130, and 134 and engine 132 can communicate with one another. Additionally, each of the modules of the MRaaS module 126 may communicate with the various databases connected to the server 102, e.g., the knowledge graph database 110, the ontology knowledge base 112, the historical database 114, and the inference database 116. ).
Regarding Claim 7. Josephson et al. teach: The computer network of claim 1, wherein the information to be searched is combined, by a computing system of the computer network, in an information storage and retrieval system searchable by the search engine ( See rejection of claim 1 and [0036] In some implementations, system 100 includes a client device 104, a server 102, a knowledge graph database 110, an ontology knowledge base 112, a historical database 114, and an inference database 116. The server 102 can include one or more computers connected locally or connected over a network. The one or more computers of the server 102 can include an NLP module 108, one or more application programmable interfaces (APIs) 124, and an MRaaS module 126. The server 102 is configured to communicate with the client device 104 over a network. [0037] The server 102 can communicate with the databases or data stores 110, 112, 114, and 116 to obtain medical, ontology, knowledge systems, and other information for generating the response to the query.).
Regarding Claim 8. Josephson et al. teach: The computer network of claim 1, wherein the plurality of files is retrieved, by a computing system of the computer network, from sources that are in computer networks internal and external to the computer network (See rejection of claim 1 and [0037] The server 102 may include one or more computers connected locally or over a network. The server 102 may also communicate with the knowledge graph database 110, the ontology knowledge base 112, the historical database 114, and the inference database 116. The server 102 can communicate with the client device 104 to obtain a query, provide results of the query, obtain feedback from the results, and provide new results of from the feedback, for example. The server 102 can also communicate with the client device 104 for other purposes, such as authorization and login purposes. The server 102 can communicate with the databases or data stores 110, 112, 114, and 116 to obtain medical, ontology, knowledge systems, and other information for generating the response to the query. [0049] During stage (C), the server 102 accesses and retrieves data from the knowledge graph database 110, the ontology knowledge base 112, the historical database 114, and the inference database 116. The server 102 can access each of these data stores, e.g., databases, warehouses, data links, and others, for providing additional information to the MRaaS module 126.).
Regarding Claim 9, Josephson et al. teach: The computer network of claim 1, wherein concepts and knowledge entities are extracted, by a computing system of the computer network, from text within the plurality of files using a language processing and language understanding process (See rejection of claim 1 and [0031] Many sources of this medical information are text-based sources and the information can be formed from a variety of elements. [0033] In this context, techniques are described in this specification for generating a predictive model that recognizes clinical terms and a relationship between the terms, and provides a response to the query using a Machine Reasoning as a Service (MRaaS) module. In particular, a computing system can receive a query from a user or other structured and unstructured datasets, e.g., external medical databases, medical journals, textbooks, and other data sources, using a natural language processor (NLP) module and other deep learning algorithms. The techniques use the NLP module to recognize, extract, and categorize medical entities, e.g., indications, drugs, diseases, procedures, etc., as well as determine relationships between the medical entities with reference to the terms that describe the entities.).
Regarding Claim 10, Josephson et al. teach: The computer network of claim 1, wherein the derivative of the file comprises data extracted, by a computing system of the computer network, from the file, and wherein at least part of the data extracted from the file is translated, by the computing system of the computer network, from a first language to a second language (convert format for structure or unstructured datasets for search or receive spoken information from a user and convert it to usable digital information) before being used by the search engine to perform the search ( [0032] When querying information from these medical sources, domain expertise is often required to recognize relevant terms, categorize the terms based on meaning, and convert the terms to a suitable format that can be then used for data querying. [0033] In this context, techniques are described in this specification for generating a predictive model that recognizes clinical terms and a relationship between the terms, and provides a response to the query using a Machine Reasoning as a Service (MRaaS) module. In particular, a computing system can receive a query from a user or other structured and unstructured datasets, e.g., external medical databases, medical journals, textbooks, and other data sources, using a natural language processor (NLP) module and other deep learning algorithms. [0042] In some implementations, the server 102 includes the NLP module 108 that is used to obtain the query 106 and process the query 106. In particular, the NLP module 108 can include one or more high-level programming languages that are programmed to recognize and extract a plurality of terms from the query 106. For example, the NLP module 108 can include a python or a java module that uses coded instructions to identify terms and words that describe medical entities for different medical entities in the query 106. [0062] The external medical documents can correspond to structured and unstructured data. 
The structured data can include various types of medical publications and health-related text, such as medical textbooks, online publications relating to healthcare, medical journals, electronic publications, medical treatises, web-based articles, medical websites, or various resources of information that can be formatted for data extraction and processing by a computer system, e.g., the server 102. [0153] The control interface 458 may receive commands from a user and convert them for submission to the processor 452. [0157] Device 450 may also communicate audibly using audio codec 460, which may receive spoken information from a user and convert it to usable digital information.).
Regarding Claim 11, Josephson et al. teach: The computer network of claim 1, wherein a result of the search performed by the search engine is a knowledge graph (See rejection of claim 1 and [0073] In this case, the MRaaS module 126 can determine which knowledge graphs and ontologies are available and can provide results from the available knowledge graphs and ontologies.[0074] In response to the ontology module 128 identifying the ontologies that match to the contents of the query, e.g., entities 118, relationships 120, and profile 122, the ontology module can transmit the identified ontologies to the knowledge graph module 130. [0075] In some implementations, the knowledge graph module 130 can receive the identified ontologies and identify one or more knowledge graphs that match to the identified ontologies.).
Regarding Claim 12, Josephson et al. teach: The computer network of claim 11, wherein a computing system of the computer network is configured to generate an image of the knowledge graph for display in a graphical user interface (GUI) (See rejection of claim 11 and [0108] A user, such as user 103, may be able to interact with the UI of the client device 104 to view the results and report on the data representing the path traversed. For example, the user 103 can view each of the results or response to the query by selecting and opening the results. The results may include, for example: a document, a link to a document in a database local to the server 102, a link to a document external to the server 102, such as over the Internet, a snippet of information clipped from a document, a textbook, or a journal, and can also include media, such as images, videos, and/or audio files. [0109] The data representing the path traversed can be illustrated on the UI of client device 104 as an interactive graph, an interactive structured document, or another form of GUI interaction. [0131] During stage (D), the server 222 can transmit the results 218 to the client device 210 over a network 224. The results 218 can include responses 218a to the query and the data 218b indicated by the explainability module illustrating the path traversed through the knowledge graph to identify responses 213a. The client device 210 can receive the results 218 and display the results 218 in a GUI of the client device 210.).
Regarding Claim 13, Josephson et al. teach: The computer network of claim 1, wherein the search engine is further configured to receive text input (See rejection of claim 1 and [0038] For example, the user 103 may be researching a complex medical topic and provide a query that recites “Are there cases in the last decade where patients had pericardial aortic valves inserted in the reverse position, to serve as mitral valve replacements, and how often in such cases did endocarditis or tricuspid valve infection develop, and how long after the procedure?” The user 103 can interact with the client device 104 to enter the query 106 via typing, speaking, or interacting with a touchscreen of the client device 104. [0042] In some implementations, the server 102 includes the NLP module 108 that is used to obtain the query 106 and process the query 106. In particular, the NLP module 108 can include one or more high-level programming languages that are programmed to recognize and extract a plurality of terms from the query 106. The NLP module 108 can receive the query 106 and can parse the query into one or more words.).
Regarding Claim 14, Josephson et al. teach: The computer network of claim 13, wherein the search engine is further configured to: analyze the text input to determine an intent of a user providing the input, prior to performing the search; and perform the search for information using the information sources external to the computer network and internal to the computer network according to at least the determined intent of the user (See rejection of claim 13 and [0036] The server 102 can include one or more computers connected locally or connected over a network. The one or more computers of the server 102 can include an NLP module 108, one or more application programmable interfaces (APIs) 124, and an MRaaS module 126. [0038] The user 103 can interact with the client device 104 to enter the query 106 via typing, speaking, or interacting with a touchscreen of the client device 104. [0042] In some implementations, the server 102 includes the NLP module 108 that is used to obtain the query 106 and process the query 106. In particular, the NLP module 108 can include one or more high-level programming languages that are programmed to recognize and extract a plurality of terms from the query 106. The NLP module 108 can receive the query 106 and can parse the query into one or more words. For example, the NLP module 108 can parse the query 106 to identify these words: “Are,” “there,” “cases,” . . . “how,” “long,” “after,” “the,” and “procedure”, among others.[0043] In some implementations, the NLP module 108 can generate entities 118 from the parsed query 106 and relationships 120 from the parsed query 106. For example, each word or term identified in the query 106 can be searched against a data source, such as a dictionary of specific medical terms or a textbook that describes medical concepts. 
[0084] In this context, the inference engine 132 can logically identify results or responses to the query 106 by following a path in the knowledge graph that matches to contents or terms within the query 106. Ultimately, this path taken by the inference engine 132 enables the MRaaS module 126 to determine results that closely resembles or matches to the intent and content indicated by the user 103's query 106. [0096] The server 102 can retrieve one or more ontologies from the ontology knowledge base 112 based on the determined characteristics of the user that transmitted the query. For example, the server 102 can retrieve local ontologies 145 on a similar network as the server 102 or retrieve other ontologies 146 on external networks over the Internet or another local network.).
Regarding Claim 15, Josephson et al. teach: The computer network of claim 1, wherein the plurality of files comprise document files, data files, electronic copies of journal articles, electronic copies of patents and patent applications, image files, video files, audio files, and electronic copies of forms of data visualizations including data plots (See rejection of claim 1 and [0058] A knowledge graph is typically visualized as a graphical structure and includes various components. [0062] The structured data can include various types of medical publications and health-related text, such as medical textbooks, online publications relating to healthcare, medical journals, electronic publications, medical treatises, web-based articles, medical websites, or various resources of information that can be formatted for data extraction and processing by a computer system, e.g., the server 102. The unstructured data may correspond to different datasets that relate to medical activities, patient medical records, or healthcare transactions. Additionally, the historical database 114 can provide the server access to websites and other cloud storage components that enable the server 102 to search for various medical data, e.g., medical journals, texts, and other peer-reviewed documents. [0108] A user, such as user 103, may be able to interact with the UI of the client device 104 to view the results and report on the data representing the path traversed. The results may include, for example: a document, a link to a document in a database local to the server 102, a link to a document external to the server 102, such as over the Internet, a snippet of information clipped from a document, a textbook, or a journal, and can also include media, such as images, videos, and/or audio files.).
Regarding Claim 16, Josephson et al. teach: The computer network of claim 1, wherein the file comprises a mass spectra (the entities and relationships represented by the nodes and edges in the knowledge graph, respectively, match to the entities and relationships based on statistics, probabilities, or other factors) (See rejection of claim 1 and [0057] A knowledge graph, also known as a semantic network, can represent a network of real-world events, e.g., objects, events, situations, or concepts, and can illustrate the relationship between these events. A knowledge graph is typically visualized as a graphical structure and includes various components. [0140] The inference engine can traverse the identified knowledge graph to determine the conclusion based on the terms or group of terms and their corresponding meaning from the received query. The knowledge graph can match the meaning for each term from the query to entities and relationships found in the identified knowledge graph. For example, the inference engine can hop from one node to another node along an edge that connects the two nodes when the data representing these corresponding components match to the entities, relationships, and profile to a particular degree. The inference engine may determine that the entities and relationships represented by the nodes and edges in the knowledge graph, respectively, match to the entities and relationships based on statistics, probabilities, or other factors. The inference engine can repeat this iterative traversal process until one or more nodes are reached with no further connecting edges. In response, the inference engine can return the values described by these nodes.).
Regarding Claim 17, Josephson et al. teach: The computer network of claim 1, wherein the file comprises a temperature profile (See rejection of claim 1 and [0062] In some implementations, the historical database 114 can provide the server 102 access to external medical documents. The external medical documents can correspond to structured and unstructured data. The unstructured data may correspond to different datasets that relate to medical activities, patient medical records, or healthcare transactions. [0098] In other examples, the server 102 can process other unstructured sources, e.g., speeches, patient data, lectures, etc., process the data from the unstructured sources into a machine-readable format understood by the MRaaS module 126, e.g., digitized data, and provide the digitized data to the MRaaS module 126. Note: It is inherent for patient medical records or patient data to include a patient temperature profile.).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The prior art of record, Kim et al. (KR 20210032245 A), teaches a Patent Searching Apparatus and Method.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD K ISLAM whose telephone number is (571) 270-5878. The examiner can normally be reached Monday - Friday, EST (IFP).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras Shah, can be reached on 571-270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMAD K ISLAM/Primary Examiner, Art Unit 2656