DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is in response to the Amendment filed 09/29/2025.
Response to Arguments
Claims 1 – 20 are pending in this Office Action. After a further search and a thorough examination of the present application, claims 1 – 20 remain rejected.
Applicant's arguments filed with respect to claims 1 – 20 have been fully considered but they are moot in view of the new grounds of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 2018/0173808 A1) ('Sharma' hereinafter) in view of Chaudhuri et al. (US 2020/0394190 A1) ('Chaudhuri' hereinafter), and further in view of Raviv et al. (US 2025/0200034 A1) ('Raviv' hereinafter).
With respect to claims 1, 11, and 16,
Sharma discloses a method comprising: determining, from a natural language query, a query language from a plurality of query languages based, at least in part, on an intent of the natural language query (figures 1, 18, 19 and paragraphs 46 – 53 teach identifying the intent of a natural language query; the intent identifier operates to map extracted entities to a domain model, and the domain model may be used by the domain binder to identify equivalent actions and entities of the domain model with respect to the actions and entities extracted from the query; the mapped actions and entities may be encoded along with temporal logic into a domain-specific language representation of intent, Sharma); generating a first prompt for a first language model corresponding to the query language, wherein the first prompt at least indicates a grammar for the query language and instructions to convert the natural language query to a database query satisfying the grammar, wherein the grammar at least comprises syntax of the query language (paragraphs 48 – 58 teach identification of the intent for the query; the intent identifier may generate an intent model, which may be generated in a domain-specific language representation described as a custom-designed language with defined syntax, semantics, and grammar rules, applied to a particular domain to serve an intended purpose; the domain-specific language representation of the intent model (denoted "intent domain specific language representation 126") may be forwarded, Sharma); prompting the first language model with the first prompt to obtain a first database query as output and, based on determining that the first database query is a valid query for the query language, retrieving data that satisfies the first database query (figures 1, 18, 19, paragraphs 46 – 58 teach the language model query using bots to retrieve results, Sharma).
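For illustration only, the steps recited in claim 1 can be sketched as follows. This sketch is not drawn from any cited reference or from applicant's disclosure; every name (INTENT_TO_LANGUAGE, GRAMMARS, build_prompt) and every mapping is hypothetical.

```python
# Hypothetical sketch of the claim 1 pipeline: select a query language
# from the predicted intent, then build a prompt that states the
# grammar of that language and instructs conversion of the question.

# Illustrative mapping from a predicted intent to a query language.
INTENT_TO_LANGUAGE = {
    "asset_lookup": "sql",
    "relationship_query": "cypher",
}

# Illustrative grammar summaries for each query language.
GRAMMARS = {
    "sql": "SELECT <columns> FROM <table> WHERE <predicate>;",
    "cypher": "MATCH <pattern> RETURN <expr>;",
}

def build_prompt(nl_query: str, language: str) -> str:
    """First prompt: indicates the grammar for the chosen query language
    and instructs conversion of the natural language query into a
    database query satisfying that grammar."""
    return (
        f"Grammar for {language}: {GRAMMARS[language]}\n"
        f"Convert the following question into a {language} query "
        f"that satisfies the grammar above.\n"
        f"Question: {nl_query}"
    )

intent = "relationship_query"          # stand-in for an intent classifier
language = INTENT_TO_LANGUAGE[intent]  # query language chosen from intent
prompt = build_prompt("Which assets reach vulnerability CVE-1234?", language)
```

The prompt string would then be supplied to the first language model; the model's output would be checked for validity before retrieval.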
Sharma teaches transforming natural language to a query language but does not specifically disclose, as claimed, a plurality of different query languages.
However, Chaudhuri teaches query languages different from the natural input language in figures 1, 2 and paragraph 5, teaching a method of receiving a natural language user input, determining a user intent, generating an intermediate query, translating the intermediate query to a database query, executing the database query against a database, and generating a result output that comprises results of the database query in natural language. Furthermore, paragraph 38, in describing figure 2, teaches determining user intent using NL parsers and context information; the user intent may then be used to determine an intermediate query through graphs and/or pre-trained ML models, and the graph nodes and relationships may be parsed into a database query. Paragraphs 44 – 49 teach intent-based intermediate query translation, and the database query syntax is chosen accordingly.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sharma with the teachings of Chaudhuri because they are in the same field of study, namely, natural language processing and queries. Furthermore, Chaudhuri's method improves the method of Sharma through automated processing and translation of natural language user queries into graph database queries that facilitate ease of use and efficient user access. It provides a user-friendly, dynamic tool for automated business intelligence analytics that enables end users to maximize efficiency and flexibility while interacting with data systems across applications, and it provides real-time analytics based on Artificial Intelligence (AI) and an interactive chatbot interface (paragraphs 14 – 18, Chaudhuri).
Sharma in combination with Chaudhuri teaches a domain-specific language model and bot-based query guidance but does not specifically disclose generating natural language responses from query results.
However, Raviv teaches a large language model (LLM) configured to convert the natural language request to a suitable query capable of being run against the campaign data sets; to run the query against the data sets; to receive output from the query; to encode relevant query output together with the natural language request or the suitable query into an encoded output; and to decode the encoded output to generate a natural language response to the natural language request (paragraphs 44 – 49, Raviv).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sharma and Chaudhuri with the teachings of Raviv because they are in the same field of study, namely, natural language processing and queries. Furthermore, Raviv teaches further improving the efficiency and accuracy of obtaining and processing user input with respect to accessing and using the data, and providing for simple and effective data mining and insight generation over large campaign data sets using only natural language requests, without data store information requests expressed in a programming language. Using a dedicated campaign large language model (LLM) connected to a data store of campaign-related data, the systems and methods allow a user to use natural language requests to effectively mine campaign-related data and to receive natural language responses in return (paragraphs 36 – 40, Raviv).
With respect to claims 2, 12, and 17,
Sharma as modified discloses the method of claim 1 further comprising, based on determining that the natural language query is not a valid query for the query language, generating a second prompt, wherein the second prompt indicates instructions to generate a database query similar to valid database queries for the query language and prompting the first language model with the second prompt to obtain a second database query as output (paragraphs 46 – 58, Sharma).
With respect to claims 3, 13, and 18,
Sharma as modified discloses the method of claim 2, wherein generating the second prompt and prompting the first language model with the second prompt comprises: identifying one or more second natural language queries that are semantically similar to the natural language query and one or more second database queries corresponding to the one or more second natural language queries, wherein the one or more second database queries are valid queries for the query language; generating the second prompt for the first language model to indicate the one or more second natural language queries in association with corresponding ones of the one or more second database queries, wherein the instructions to generate a database query similar to valid database queries for the query language comprise instructions to generate a database query similar to the one or more second database queries and prompting the first language model with the second prompt to obtain the second database query as output (paragraphs 48 – 56, Sharma and figures 1, 2, paragraphs 38, 44 – 48 and 54, Chaudhuri).
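For illustration only, the second-prompt construction recited in claim 3 can be sketched as below. The example store, the similarity ranking, and all names are hypothetical assumptions, not taken from any cited reference; a real system would use semantic similarity (e.g., embedding distance) rather than the word-overlap stand-in shown here.

```python
# Hypothetical sketch of claim 3: find natural language queries similar
# to the failed one, pair them with their known-valid database queries,
# and build a few-shot "second prompt" instructing the model to produce
# a query similar to those valid examples.

# Illustrative store of (natural language query, valid database query) pairs.
EXAMPLE_STORE = [
    ("list all hosts", "SELECT name FROM hosts;"),
    ("count open ports", "SELECT COUNT(*) FROM ports WHERE state = 'open';"),
]

def similar_examples(nl_query: str, k: int = 2):
    # Stand-in for semantic similarity: rank stored examples by the
    # number of words shared with the input query.
    words = set(nl_query.lower().split())
    return sorted(
        EXAMPLE_STORE,
        key=lambda pair: -len(words & set(pair[0].split())),
    )[:k]

def build_second_prompt(nl_query: str) -> str:
    # Second prompt: similar examples plus instructions to generate a
    # database query similar to the valid ones shown.
    shots = "\n".join(
        f"Q: {q}\nA: {db}" for q, db in similar_examples(nl_query)
    )
    return (
        "Generate a database query similar to the valid examples below.\n"
        f"{shots}\nQ: {nl_query}\nA:"
    )

prompt = build_second_prompt("list all open ports")
```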
With respect to claims 4, 14, and 19,
Sharma as modified discloses the method of claim 2, further comprising: determining that the second database query is a valid query for the query language and retrieving data that satisfies the second database query (figures 1, 2, paragraphs 38, 44 – 48 and 54, Chaudhuri).
With respect to claims 5, 15, and 20,
Sharma as modified discloses the method of claim 1, further comprising determining whether the first database query is a valid query for the query language, wherein determining whether the first database query is a valid query for the query language comprises applying a lint program to the first database query (paragraphs 46 – 58, Sharma and figures 1, 2, paragraphs 38, 44 – 48 and 54, Chaudhuri).
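For illustration only, the lint-style validity check recited in claim 5 can be sketched as follows, under the assumption that the query language is SQL. The schema and function name are hypothetical; SQLite's parser serves as the "lint program," checking syntax without executing the query against real data.

```python
# Hypothetical sketch of claim 5: apply a lint program to the first
# database query to determine whether it is valid for the query
# language. Here SQLite's EXPLAIN compiles the statement (parse and
# plan only), so syntax errors surface without touching real data.
import sqlite3

def is_valid_sql(query: str) -> bool:
    conn = sqlite3.connect(":memory:")
    # Illustrative schema so table references in the query resolve.
    conn.execute("CREATE TABLE assets (id INTEGER, name TEXT)")
    try:
        conn.execute(f"EXPLAIN {query}")  # compiles but does not run the query
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```

A query that fails the check would trigger the second-prompt path of claim 2 rather than retrieval.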
With respect to claim 6,
Sharma as modified discloses the method of claim 1, wherein the first language model comprises a large language model (paragraphs 44 – 49, Raviv).
With respect to claim 7,
Sharma as modified discloses the method of claim 1, wherein the retrieved data indicates one or more assets related to one or more vulnerabilities indicated in the natural language query and a graph structure between the one or more assets and the one or more vulnerabilities (figures 1, 2, paragraphs 38, 44 – 48 and 54, Chaudhuri).
With respect to claim 8,
Sharma as modified discloses the method of claim 7, wherein the graph structure indicates chains of exposure between assets in the one or more assets to corresponding ones of the one or more vulnerabilities (figures 1, 2, paragraphs 38, 44 – 48 and 54, Chaudhuri).
With respect to claim 9,
Sharma as modified discloses the method of claim 7, further comprising generating a summary of the graph structure with a second language model (figures 1, 2, paragraphs 38, 44 – 48 and 54, Chaudhuri).
With respect to claim 10,
Sharma as modified discloses the method of claim 1, wherein determining the query language based, at least in part, on the intent of the natural language query comprises, predicting the intent of the natural language query and associating the intent with a domain from a plurality of domains corresponding to the query language (paragraphs 46 – 58, Sharma).
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20240143584 A1 teaches receiving a complex question from a user and decomposing the complex question into one or more sub-questions using one or more of, but not limited to, coreference resolution, dependency parsing, chunking, and prompt techniques. The method may further include generating code scripts for each sub-question using a Large Language Model (LLM).
US 20210216928 A1 teaches performing natural language processing to categorize threats to assets and perform correlated risk analyses for threats.
US 20230004562 A1 teaches automated analysis of business intelligence: receiving natural language input from a user; evaluating the intent using a parser and an interpreter; generating a query based on a context manager; scrolling by the user through the results of the query to refine them based on the user's interest in one or more portions of the results; and outputting the results of the query.
US 20220382752 A1 teaches mapping natural language to queries using a query grammar.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAVNEET K GMAHL whose telephone number is 571-272-5636.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SANJIV SHAH can be reached on . The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NAVNEET GMAHL/Examiner, Art Unit 2166 Dated: 1/23/2024
/SANJIV SHAH/Supervisory Patent Examiner, Art Unit 2166