DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In the amendment filed on December 24, 2025, the following has occurred: claim(s) 1, 3, 7-8, 10, and 18-19 have been amended; claim(s) 20-23 have been added; and claim(s) 2 and 4-6 have been cancelled. Claim(s) 1, 3, and 7-23 are now pending.
Notice to Applicant
The Examiner has not rejected claims 1, 3, and 7-23 under 35 U.S.C. 101. The limitations of independent claims 1, 18, and 19 reciting “parsing the natural language user request to generate structured contextual data comprising: (i) a database schema definition for a clinical trial protocol database; (ii) domain knowledge definitions comprising an electronic data capture query definition; and (iii) an output format definition specifying a database query language and an associated metadata structure;” “inputting the prompt to the large language model to produce a model response comprising: (i) a database query in the specified database query language; (ii) metadata including extracted trial parameters”, and “generating one or more application programming interface (API) requests based at least in part on one or more of: the retrieved protocol data and the metadata;” could not be considered to belong to any of the abstract idea groupings of Mathematical Concepts, Certain Methods of Organizing Human Activity, and Mental Processes, as these limitations improve the functioning of large language models for database querying by constraining them with specific, structured contextual data elements.
Claim Objections
Claim 7 objected to because of the following informalities: “the first database, the first database” in p. 3, l. 12. This appears to be a typographical error. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portion as “the first clinical trial protocol database, the first clinical trial protocol database”.
Claim 8 objected to because of the following informalities: “the first database” in p. 3, l. 15, “the first database” in p. 3, l. 17, “the second database” in p. 3, ll. 18-19. These appear to be typographical errors. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portions as “the first clinical trial protocol database”, “the first clinical trial protocol database”, “the second, proprietary database”.
Claim 20 objected to because of the following informalities: “the first database, the first database” in p. 6, ll. 26-27. This appears to be a typographical error. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portion as “the first clinical trial protocol database, the first clinical trial protocol database”.
Claim 21 objected to because of the following informalities: “the first database” in p. 7, l. 2, “the first database” in p. 7, l. 4, “the second database” in p. 7, ll. 4-5. These appear to be typographical errors. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portions as “the first clinical trial protocol database”, “the first clinical trial protocol database”, “the second, proprietary database”.
Claim 22 objected to because of the following informalities: “the first database, the first database” in p. 7, l. 8. This appears to be a typographical error. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portion as “the first clinical trial protocol database, the first clinical trial protocol database”.
Claim 23 objected to because of the following informalities: “the first database” in p. 7, l. 11, “the first database” in p. 7, l. 13, “the second database” in p. 7, ll. 13-14. These appear to be typographical errors. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portions as “the first clinical trial protocol database”, “the first clinical trial protocol database”, “the second, proprietary database”.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 7-8, 10-11, and 16-23 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Heyder et al. (U.S. Patent Pre-Grant Publication No. 2025/0239334).
As per independent claim 1, Heyder discloses a computer-implemented method of artificial intelligence-driven clinical trial protocol data retrieval and augmentation, the method comprising:
receiving, via a user interface executed on a computer, a natural language user request specifying one or more clinical trial parameters (See [0073]-[0074]: The user can enter natural language text (e.g., sentences and/or phrases) specifying the type of document that should be generated by the engine, the subject of the document (e.g., the subject of a particular clinical trial, such as the particular medical intervention that is being tested), the type of clinical trial that is to be performed (e.g., the “phase” of the clinical trial), the intended audience of the document (e.g., a particular government agency), and/or any other information regarding the clinical trial, which the Examiner is interpreting natural language text to encompass a natural language user request, and interpreting specifying the type of clinical trial that is to be performed to encompass specifying one or more clinical trial parameters as any other information regarding the clinical trial can also be specified);
parsing the natural language user request to generate structured contextual data (See [0075]-[0076]: The backend can parse the user's input, and convert the user's input into a format or data structure that can be further interpreted and/or processed by the backend) comprising:
(i) a database schema definition for a clinical trial protocol database (See [0104]-[0116], [0154]-[0155]: At least some of the ingested information can be stored in a vector database for future retrieval and processing, and Normalizing: flatting the schema of the data to a row-based format, which the Examiner is interpreting a vector database to encompass a clinical trial protocol database, and the decoder layer performs the functional opposite, by taking all the encodings and using their incorporated contextual information to generate an output sequence, which the Examiner is interpreting the contextual information to encompass a database schema definition);
(ii) domain knowledge definitions comprising an electronic data capture query definition (See [0111]-[0116], [0145]: (i) Querying: retrieving structure data (e.g., via API and/or SQL queries, and (iv) Generating metadata: adding metadata to the summary (e.g., metadata indicating a data type, source query, etc.), which the Examiner is interpreting source query to encompass an electronic data capture query definition as the Applicant’s Specification defines an EDC query definition as “The contextual data may also include domain knowledge definitions. These definitions could involve various elements, such as an electronic data capture query that specifies the manner of querying data stored in an EDC system, such as Medidata Rave EDC.” in Paragraph [0051]);
and (iii) an output format definition specifying a database query language and an associated metadata structure (See [0111]-[0121], [0128]: Determining vector similarity and/or Best Matching 25 (BM25) with respect to Term Frequency-Inverse Document Frequency (TF-IDF), which the Examiner is interpreting the “fast retrieval” to encompass an output format definition);
combining the structured contextual data with the natural language user request to produce a prompt for a large language model (See [0134]-[0139]: The input for the tool is a user question and context information as a long string, which should state some design parameters of a clinical trial, which the Examiner is interpreting context information to encompass contextual data ([0154]-[0155]), and the input to encompass a prompt for a large language model ([0134]));
inputting the prompt to the large language model to produce a model response (See Paragraphs [0187]-[0191]: The system causes the one or more LLMs to perform each of the actions using the one or more content generation tools associated with that action and based on the first data) comprising:
(i) a database query (See [0111]-[0116]: The system also ingests at least some of the structured data obtained during the data collection process, retrieving structure data (e.g., via API and/or SQL queries), and generating metadata, adding metadata to the summary (e.g., metadata indicating a data type, source query, etc.));
(ii) metadata including extracted trial parameters (See [0178]: Accessing the first data can include querying the structured data, normalizing the structured data, augmenting the structured data using the one or more LLMs, adding metadata to the structured data, and/or ingesting the structured data into a vector database, which the Examiner is interpreting adding metadata to the structured data to encompass metadata including extracted trial parameters);
executing the database query against at least a first clinical trial protocol database to retrieve matching protocol data (See [0093], [0118]-[0121], [0135]: The LLM can also determine a particular order or sequence in which the selected actions and/or tools can be executed to achieve the specified result or output, and the engine can also obtain data via a “fast retrieval” search of data, such as structured or unstructured data in a database, which the Examiner is interpreting an input search query to encompass the database query, and interpreting Best Matching 25 to encompass a first clinical trial protocol database to retrieve matching protocol data);
generating one or more application programming interface (API) requests based at least in part on one or more of: the retrieved protocol data and the metadata (See [0075], [0099]: Documents can be collected and searched for using Structured Query Language (SQL) and/or using an API, which the Examiner is interpreting to encompass the claimed portion as the backend receives the user's input via an application programming interface (API));
performing one or more API calls using the generated API requests to obtain one or more clinical trial metrics (See [0145], [0148]: SQL tool is used to query a clinical trial protocol database (e.g., an extended version of the ClinicalTrials.gov database), the database can include details on study design, eligibility criteria, interventions, study locations, and contact information for study investigators, which the Examiner is interpreting the details to encompass one or more clinical trial metrics);
generating a response to the natural language user request based at least in part on the retrieved clinical trial protocol data and the one or more clinical trial metrics (See [0145]-[0148], [0218]: A computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser, which the Examiner is interpreting in response to requests to encompass a response to the natural language user request ([0145]-[0148])); and
outputting the response via the user interface (See [0218]: To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device.)
Claim 18 includes the additional limitation of “a computer having one or more processors in communication with a memory, the memory storing instructions executable by said one or more processors to perform:” which Heyder encompasses as Heyder discloses in [0047] a “computer system”, in [0199] “one or more processors”, and in [0199] “a memory”.
Claim 19 includes the additional limitation of “when executed by one or more processors of a computer, cause said one or more processors to perform a method” which Heyder encompasses as Heyder discloses in [0047] a “computer system”, in [0199] “one or more processors”, and in [0199] “a memory”.
As per claim 7, Heyder discloses the method of claim 1 as described above. Heyder further teaches wherein the first clinical trial protocol database stores historical clinical trial protocol data and, in said executing of the database query against the first clinical trial protocol database, the first clinical trial protocol database is accessed via an API call to a publicly available uniform resource locator (URL) (See [0136]: Based on an input search query, this tool can be used to search in multiple sources for relevant information (e.g., web-search, literature-search, clinical trials database, clinicaltrials.gov API, etc.), which the Examiner is interpreting clinical trials database to encompass the first clinical trial protocol database stores historical clinical trial protocol data and interpreting the clinicaltrials.gov API to encompass an API call to a publicly available uniform resource locator (URL).)
Claims 20 and 22 mirror claim 7, differing only in statutory category, and are rejected for the same reasons as claim 7.
As per claim 8, Heyder discloses the method of claims 1 and 7 as described above. Heyder further teaches wherein said executing of the database query against the first clinical trial protocol database further comprises: retrieving additional clinical trial protocol data from a second, proprietary database (See [0136]: The combined search tool is used for searching data sources for scientific information regarding clinical trials, which the Examiner is interpreting data sources to encompass a second, proprietary database (clinical trials database)); and
correlating the clinical trial protocol data retrieved from the first clinical trial protocol database and the second, proprietary database using respective national clinical trial (NCT) numbers (See [0065], [0136]: The one or more rules can specify that the training data be provided to the generative AI module for training or prompting (e.g., such that the generative AI module can identify trends and/or correlations between the contents of clinical trial documents and the subject matter therein, and generate new output based on those identified trends and/or correlations), which the Examiner is interpreting correlations between the contents of clinical trial documents to encompass correlating the clinical trial protocol data retrieved from the first clinical trial protocol database and the second, proprietary database using respective national clinical trial (NCT) numbers as the combined search tool can search in multiple sources for relevant information (e.g., web-search, literature-search, clinical trials database, clinicaltrials.gov API, etc.) and clinicaltrials.gov uses NCT numbers.)
Claims 21 and 23 mirror claim 8, differing only in statutory category, and are rejected for the same reasons as claim 8.
As per claim 10, Heyder discloses the method of claim 1 as described above. Heyder further teaches further comprising: parsing the model response to extract the database query (See [0094]: The system extracts the most relevant point(s) from the question context that will define the answer and/or produces the result of the action, which the Examiner is interpreting extracts the most relevant point(s) from the question context to encompass extract the database query); and
executing the database query, in said retrieving of clinical trial protocol data, against said at least first database (See [0111]-[0116]: The system also ingests at least some of the structured data obtained during the data collection process, querying: retrieving structure data (e.g., via API and/or SQL queries), and generating metadata, adding metadata to the summary (e.g., metadata indicating a data type, source query, etc.)); and
replacing at least a portion of the metadata with the retrieved clinical trial protocol data to produce an augmented model response (See [0110]-[0117]: At least some of the ingested information can be stored in a vector database for future retrieval and processing, which the Examiner is interpreting at least some of the ingested information can be stored in a vector database to encompass replacing at least a portion of the metadata with the retrieved clinical trial protocol data to produce an augmented model response ([0114]).)
As per claim 11, Heyder discloses the method of claims 1 and 10 as described above. Heyder further teaches further comprising: parsing the augmented model response to extract the metadata (See [0118]-[0119]: The engine can also obtain data via a “fast retrieval” search of data, such as structured or unstructured data in a database, a fast retrieval search can be performed based by filtering data using a keywords and/or metadata filter based on data flow, which the Examiner is interpreting metadata filter based on data flow to encompass extract the metadata); and
comparing, in said generating one or more API requests, variables of the extracted metadata to variables of the API (See [0101]-[0102], [0139]-[0151]: The generative AI module differentially weighs the significance of each part of an input (which includes the recursive output) data, and uses one or more attention mechanisms to provide context for any position in the input sequence, which the Examiner is interpreting differentially weighs the significance of each part of an input to encompass comparing variables of the extracted metadata to variables of the API as the metadata regarding the tile can be identified ([0101]).)
As per claim 16, Heyder discloses the method of claim 1 as described above. Heyder further teaches further comprising performing fine tuning of the large language model using a curated dataset which is continuously updated (See [0049], [0065]: The processing rules can include one or more rules for implementing, instruction tuning or prompting, and operating the generative AI module to produce the output data, which the Examiner is interpreting one or more rules for implementing, instruction tuning to encompass performing fine tuning of the large language model, and the engine can use the generative AI module to generate and update the summaries during the generation of the documents (e.g., in real time) in order to provide the user with up to date information regarding the progress of the document generation process to encompass a curated dataset which is continuously updated.)
As per claim 17, Heyder discloses the method of claim 1 as described above. Heyder further teaches further comprising: receiving, after said outputting, a user rating of the response via the user interface (See [0072]: A user can interact with the application to input data and/or commands to the engine, and review data generated by the engine, which the Examiner is interpreting review data generated by the engine to encompass a user rating of the response); and
performing training of the large language model based at least in part on the user rating (See [0154]-[0162]: An encoder includes one or more encoding layers that process the input iteratively one layer after another, while the decoder includes one or more decoding layers that perform a similar operation with respect to the encoder's output, and the attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training due to computational matrix operation optimizations that quickly compute matrix operations, which the Examiner is interpreting the input iteratively one layer after another and training to encompass performing training of the large language model based at least in part on the user rating.)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Heyder et al. (U.S. Patent Pre-Grant Publication No. 2025/0239334) in view of Walpole et al. (U.S. Patent Pre-Grant Publication No. 2019/0206521).
As per claim 3, Heyder discloses the method of claim 1 as described above. Heyder may not explicitly teach wherein, in said parsing the natural language user request to generate the structured contextual data, the domain knowledge definitions include a patient burden index definition.
Walpole teaches a method wherein, in said parsing the natural language user request to generate the structured contextual data, the domain knowledge definitions include a patient burden index definition (See [0039]-[0041]: The patient burden index may be the result of a mathematical function that includes the results of machine learning applied to data associating subject-related factors and self-reported or imputed levels of burden, which the Examiner is interpreting the patient burden index to encompass patient burden index definition when combined with Heyder.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Heyder such that, in said parsing of the natural language user request to generate the structured contextual data, the domain knowledge definitions include a patient burden index definition, as taught by Walpole. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Heyder with Walpole with the motivation of improving the design and execution of research protocols (See Detailed Description of Walpole in Paragraph [0023]).
As per claim 9, Heyder discloses the method of claim 1 as described above. Heyder may not explicitly teach wherein, in said performing said one or more API calls using the generated API requests, said one or more API calls are made to one or more of the following: a screening failure predictor, a budget calculator, and a patient burden index calculator.
Walpole teaches a method wherein, in said performing said one or more API calls using the generated API requests, said one or more API calls are made to one or more of the following: a screening failure predictor, a budget calculator, and a patient burden index calculator (See [0045]: Calculating the patient burden index is accomplished using machine learning methods based on the available, which the Examiner is interpreting calculating the patient burden index to encompass a patient burden index calculator.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Heyder such that said one or more API calls are made to one or more of the following: a screening failure predictor, a budget calculator, and a patient burden index calculator, as taught by Walpole. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Heyder with Walpole with the motivation of improving the design and execution of research protocols (See Detailed Description of Walpole in Paragraph [0023]).
Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Heyder et al. (U.S. Patent Pre-Grant Publication No. 2025/0239334) in view of Weston et al. (U.S. Patent Pre-Grant Publication No. 2025/0132036).
As per claim 12, Heyder discloses the method of claim 1 as described above. Heyder may not explicitly teach wherein said inputting the prompt to the large language model to produce the model response comprising the database query and metadata uses a zero-shot learning training process.
Weston teaches a method wherein said inputting the prompt to the large language model to produce the model response comprising the database query and metadata uses a zero-shot learning training process (See [0181]: The conditioning utilized by the methods above can invoke so-called “zero” or “few-shot transfer learning” of large language models.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Heyder such that said inputting the prompt to the large language model to produce the model response comprising the database query and metadata uses a zero-shot learning training process, as taught by Weston. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Heyder with Weston with the motivation of providing more effective review, rating, and administration methods for clinical assessments (See Background of Weston in Paragraph [0006]).
As per claim 13, Heyder discloses the method of claim 1 and Heyder/Weston discloses the method of claim 12 as described above. Heyder further teaches further comprising iteratively refining the prompt to improve performance of the large language model (See Paragraph [0154]: The generative AI module can also include an encoder, and an encoder includes one or more encoding layers that process the input iteratively one layer after another, while the decoder includes one or more decoding layers that perform a similar operation with respect to the encoder's output, which the Examiner is interpreting the encoder to encompass iteratively refining the prompt.)
As per claim 14, Heyder discloses the method of claim 1 as described above. Heyder may not explicitly teach wherein said inputting the prompt to the large language model to produce the model response comprising the database query and metadata uses a few-shot learning training process.
Weston teaches a method wherein said inputting the prompt to the large language model to produce the model response comprising the database query and metadata uses a few-shot learning training process (See [0181]: The conditioning utilized by the methods above can invoke so-called “zero” or “few-shot transfer learning” of large language models.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Heyder such that said inputting the prompt to the large language model to produce the model response comprising the database query and metadata uses a few-shot learning training process, as taught by Weston. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Heyder with Weston with the motivation of providing more effective review, rating, and administration methods for clinical assessments (See Background of Weston in Paragraph [0006]).
As per claim 15, Heyder discloses the method of claim 1 and Heyder/Weston discloses the method of claim 14 as described above. Heyder further teaches further comprising inputting a small number of manually-labeled examples to train the large language model (See [0057]: The database module can store training data, the training data can include example clinical trial documents, such as those previously generated by the engine and/or those manually produced by one or more human users, which the Examiner is interpreting manually produced by one or more human users to encompass a small number of manually-labeled examples to train the large language model.)
Response to Arguments
In the Remarks filed on December 24, 2025, the Applicant argues that the newly amended and/or added claims overcome the Claim Objection(s), 35 U.S.C. 101 rejection(s), 35 U.S.C. 102 rejection(s), and 35 U.S.C. 103 rejection(s). The Applicant notified the Examiner on January 6, 2025 that the Claim Objection(s) had been addressed in the amended claims, but the Applicant’s Remarks did not reflect that the Claim Objection(s) had been addressed. The Examiner acknowledges that the newly added and/or amended claims overcome the previous Claim Objection(s) and 35 U.S.C. 101 rejection(s). However, the Examiner does not acknowledge that the newly added and/or amended claims overcome the newly added Claim Objection(s), 35 U.S.C. 102 rejection(s), and 35 U.S.C. 103 rejection(s).
The Applicant argues that:
(1) claim 1 features parsing a natural language user request to generate structured contextual data comprising a database schema definition for a clinical trial protocol database, domain knowledge definitions comprising an electronic data capture (EDC) query definition, and an output format definition specifying a database query language and an associated metadata structure. The Examiner cites Heyder Paragraph [0213], in the context of dependent claim 2 (cancelled herein), as disclosing "the contextual data comprises a schema for the first database," interpreting Heyder's description of an "index database" with multiple collections of data as such a schema. However, Paragraph [0213] merely states that the index database may include multiple collections of data organized and accessed differently. This is a generic description of data organization and does not teach generating a database schema definition as part of structured contextual data from a natural language request for the purpose of controlling large language model output. The claimed database schema definition is an explicit, structured representation of a clinical trial protocol database - an aspect which is also absent from Heyder. As acknowledged at p. 19 of the Office Action, Heyder does not disclose contextual data comprising domain knowledge definitions, including an electronic data capture query definition, i.e., a set of domain-specific rules or parameters for querying an EDC system. The Examiner cites Heyder Paragraph [0144], in the context of dependent claim 4, as disclosing "the contextual data comprises an output format definition," interpreting the output of JSX code in markdown format as such a definition. However, Paragraph [0144] merely describes outputting a string containing JSX code and markdown for rendering charts or graphs in a GUI.
This is solely a display formatting mechanism, not an output format definition that specifies both a database query language (such as ANSI SQL) and a structured metadata arrangement for downstream processing. The claimed output format definition is part of the structured contextual data that constrains large language model output, which is not taught by Heyder. In view of the above, it is deemed that Heyder does not disclose "parsing the natural language user request to generate structured contextual data comprising: (i) a database schema definition for a clinical trial protocol database; (ii) domain knowledge definitions comprising an electronic data capture query definition; and (iii) an output format definition specifying a database query language and an associated metadata structure," as recited in amended claim 1. Accordingly, claim 1 is not anticipated by Heyder;
(2) Walpole was cited in the context of dependent claims 3 and 9. Claim 3 recites that the domain knowledge definitions further comprise a patient burden index definition. Claim 9 recites that the API requests are made to one or more of a screening failure predictor, a budget calculator, and a patient burden index calculator. Nothing has been found in Walpole that would remedy the deficiencies of Heyder discussed above with respect to the features of amended claim 1. Weston was cited in the context of dependent claims 12-15. Claim 12 recites that the inputting of the prompt to the large language model uses a zero-shot learning training process. Claim 13, which depends from claim 12, recites iteratively refining the prompt to improve the performance of the model. Claim 14 recites that the prompt input uses a few-shot learning training process. Claim 15 recites inputting a small number of manually-labeled examples to train the model. Nothing has been found in Weston that would remedy the deficiencies of Heyder and Walpole discussed above with respect to the features of amended claim 1. In view of the above, amended independent claim 1 is deemed to be patentable over any permissible combination of Heyder, Walpole, and Weston. Amended independent claims 18 and 19 recite patentable features discussed above with respect to amended claim 1. Accordingly, claims 18 and 19 are deemed to be patentable over the cited prior art for reasons discussed above. The dependent claims distinguish the invention over the applied prior art for reasons discussed above in regard to their corresponding independent claims as well as on their own merits.
In response to argument (1), the Examiner does not find the Applicant’s argument(s) persuasive. The Examiner maintains that Heyder teaches “parsing the natural language user request to generate structured contextual data” in Paragraphs [0075]-[0076]; “(i) a database schema definition for a clinical trial protocol database” in Paragraphs [0116], [0154]-[0155]; “(ii) domain knowledge definitions comprising an electronic data capture query definition” in Paragraphs [0111]-[0116], [0145]; and “(iii) an output format definition specifying a database query language and an associated metadata structure” in Paragraphs [0111]-[0121], [0128], as described above in the 35 U.S.C. 102 rejection(s). The Examiner has not relied on Paragraphs [0144] and [0213] to address newly amended independent claim 1. The 35 U.S.C. 102 rejection(s) stand.
In response to argument (2), the Examiner does not find the Applicant’s argument(s) persuasive. The Examiner maintains the 35 U.S.C. 103 rejection(s), as the Examiner has relied upon Heyder to teach the newly amended independent claims 1, 18-19 as described above in the response to argument (1). The 35 U.S.C. 103 rejection(s) stand.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bennett S Erickson whose telephone number is (571)270-3690. The examiner can normally be reached Monday - Friday: 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan, can be reached at (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Bennett Stephen Erickson/Primary Examiner, Art Unit 3683