DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
All previous objections and rejections directed to the Applicant’s disclosure and claims not discussed in this Office Action have been withdrawn by the Examiner.
Response to Amendments and Arguments
The § 101 rejections are maintained. The Applicant has amended the claims to include “the AI model comprising a generative pretrained transformer”, arguing that the claims are directed to an invention and a practical application. However, the Examiner notes that questions and prompts to/from a generative pretrained transformer, such as ChatGPT or Gemini, are now well-known and conventional. The Applicant’s specification includes several references to GPT models. To overcome the § 101 rejections, the Applicant would need to incorporate limitations into the claim language that explain how the prompt is unique or special.
The Applicant’s arguments and amendments have been carefully considered, but are moot in view of the new grounds of rejection. The amendment that necessitates the new reference (Jain et al.) is “the AI model comprising a generative pretrained transformer”.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite steps for a systematic literature review.
The limitations of claims 1-20 for systematic literature review, as drafted, describe a method, computer program product, and apparatus that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. That is, other than the recited “instructions”, “computer”, “processor”, and “memory”, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the computer hardware language, the claims encompass steps that may be performed manually by the user. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, claims 1-20 only recite the additional elements “computer readable storage medium”, “instructions”, “computer”, “processor”, and “memory” to perform the aforementioned steps. The processor and other hardware are recited at a high level of generality (i.e., as a generic processor performing a generic computer function for computer-based systematic literature review) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional hardware elements used to perform the aforementioned steps amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claims are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20240004910, hereinafter referred to as Jafar et al., in view of US 20240256582, hereinafter referred to as Jain et al.
Regarding claim 1 (Currently amended), Jafar et al. discloses a computer-implemented method (Jafar et al., para [0004] – computer-implemented method for automated systematic literature review) comprising:
identifying a record associated with a systematic literature review (“The method 800 may further include the step of obtaining data for a first publication of a study from a first database 804. In some cases, the database may be a file or a local or remote storage location in which data from a literature search is stored...The data from the search may include various data such as complete articles, links to articles, and partial data from the articles (such as title and/or abstract). The data returned from the search may be stored in a local database or any appropriate electronic storage medium,” Jafar et al., para [0084].);
generating, with a processing device, a normalized set of sections included in the record (“Each item may include one or more data elements. In the case of publications such as case studies, the data elements may include text such as the title 206, abstract 208, and other data 210 related to the publication. In some cases, the other data may include the type of publication, the source of the publication, the date of publication, the authors, the full text of the publication, and the like. The search result data may be received as a spreadsheet file, database, XML file, or any other suitable data format,” Jafar et al., para [0056]. Here, the sections of the record (i.e., search result data) are standardized (i.e., normalized) to a particular format.);
determining, with the processing device, a question pertaining to the systematic literature review (Jafar et al., fig. 1(102) and fig 8(806). Also, “the steps of SLR may include first formulating a research question 102,” Jafar et al., para [0052].);
determining, with the processing device, an indication of at least one section of the normalized set of sections to evaluate in responding to the question (fig 8(808). Also, “Once the research question is established, a systematic search strategy is developed and executed 104 to find all the available literature on the topic, typically involving multiple databases, and may also include other sources, like the reference lists of identified articles, relevant journals, and conference proceedings 112. The identified studies may then be screened based on predefined criteria. This process typically involves two stages—initial screening based on titles and abstracts 106, and full-text screening for those that pass the initial stage 108. Each study that passes the screening stage may then be appraised for selected criteria 110, followed by additional steps that include data extraction from each study, data synthesis of the gathered data,” Jafar et al., para [0052]. And, “The input generator 212 may generate inputs to a model utilizing a question formulation 502 module. The question formulation module 502 may concatenate criteria and data from the search item (i.e. the title and abstract), and reformat the data into a question. For example, for input items [abstract], [title], [include criteria], [include keywords], [exclude criteria], and [exclude keywords] (where each [item] is representative of content of “item”) may be formatted as a question 508 by the input generator 212. The question may take the form of “Should [abstract] and [title] be accepted based on [include criteria], and [include keywords] or excluded based on [exclude criteria] and [exclude keywords]?”,” Jafar et al., para [0065].);
determining, with the processing device, a role for responding to the question (“wherein the trained language model is fine-tuned on a question-and-answer task,” Jafar et al., para [0004]. Also, “The training corpus 702 may include labeled data that may include previously prescreened search data wherein the prescreening was performed by human reviewers,” Jafar et al., para [0075]. Here, a role for responding to the question is determined by the prescreened search data.);
transforming, with the processing device, the question, content of the at least one section of the normalized set of sections, and the role for responding to the question into a prompt for an artificial intelligence (AI) model, the AI model comprising a generative pretrained transformer, and the prompt configured to cause the AI model to assume the role and generate a response to the question as output based on the content of the at least one section of the normalized set of sections (“The input generator 212 may generate inputs to a model utilizing a question formulation 502 module. The question formulation module 502 may concatenate criteria and data from the search item (i.e. the title and abstract), and reformat the data into a question. For example, for input items [abstract], [title], [include criteria], [include keywords], [exclude criteria], and [exclude keywords] (where each [item] is representative of content of “item”) may be formatted as a question 508 by the input generator 212. The question may take the form of “Should [abstract] and [title] be accepted based on [include criteria], and [include keywords] or excluded based on [exclude criteria] and [exclude keywords]?”,” Jafar et al., para [0065]. Also, “The trained language model may be fine-tuned on a question-and-answer task,” Jafar et al., para [0089]. The prompt is configured to cause the AI model to assume the role. Also, “a question may be formulated based on the set of inclusion criteria, the set of exclusion criteria, and the data for the first publication,” Jafar et al., para [0097]. And, “one probability output may be generated for each input question. In the case where a set of input questions is provided to a model, the model may output a set of probabilities where each probability corresponds to an input question. A probability output generated by the model may be indicative of an answer to the question provided to the model,” Jafar et al., para [0090].);
providing, via an interface, the prompt to the AI model (“generating an input to a trained language model, wherein each input includes the question,” Jafar et al., para [0004]. A prompt is an input to the model which encompasses the question.); and
updating, with the processing device, a record file stored in computer memory to include at least a portion of the output of the AI model as an answer to the question that is associated with the record (“the system 200 may further include a presentation 220 module configured to provide an output of the prescreening results. In embodiments, the output of the prescreening results may be generated by a reporting module 222. The reporting module 222 may generate a file (such as a table, or a spreadsheet), a GUI display, and/or other output that can be viewed by a user or used by another automated process such as another process in SLR,” Jafar et al., para [0069]. And, “marking the first publication for selection based on the selection score 814. The marking may indicate if the publication should be selected or not selected based on the selection score. The marking may be stored in association with data for each publication, such as in a table or a database. In embodiments, the marking may indicate the reasons for non-selection,” Jafar et al., para [0093]. See also Jafar et al., fig. 6.).
Jafar et al., though, does not disclose the AI model comprising a generative pretrained transformer.
Jain et al. is cited to disclose the AI model comprising a generative pretrained transformer (“Systems and methods for applying generative artificial intelligence (AI) techniques to automatically generate and display summaries of search results are provided. In some cases, a search engine or search system that leverages a generative AI model may generate a set of search results for a given search query and provide the set of search results as part of an input prompt to the generative AI model to generate a summary response of the set of search results. The generative AI model may comprise a Generative Pre-trained Transformer (GPT) model or other generative AI model. The summary response may comprise, for example, a natural language text response. The set of search results may comprise text from electronic documents and messages and/or portions thereof,” Jain et al., para [0003]. See also figs. 4A-C.). Jain et al. benefits Jafar et al. by providing reduced energy consumption and cost of computing resources, reduced search system downtime, increased quality of search results, increased reliability of information provided to search users, and improved search system performance (Jain et al., para [0004]). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Jafar et al. with those of Jain et al. to improve the search performance for the systematic literature review as described by Jafar et al.
As to claim 9, method claim 1 and apparatus claim 9 are related as apparatus and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 9 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
As to claim 17, method claim 1 and CRM claim 17 are related as CRM and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 17 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
Regarding claim 2 (Original), Jafar et al., as modified by Jain et al., discloses the computer-implemented method of claim 1, further comprising:
evaluating at least one of a framework of the record, interpretations in the record, conclusions in the record, assumptions in the record, and a source of the record to perform a qualitative assessment of the record (“The criteria data 214 may include data such as inclusion and/or exclusion criteria for evaluating the data items. In one example, elements of the criteria data 214 may be appended or combined with the elements of each data item to generate an input to the model 216. The model 216 may process the input from the input generator 212 and provide an output. In embodiments, the output may include one or more item scores 218,” Jafar et al., para [0057].);
determining a quality score of the record based on the qualitative assessment (“The model 216 may be configured and/or trained such that one or more item scores 218 provide an indication if the search result item (i.e. title and abstract of a publication) meets the criteria 214 for selection. Based on the item scores 218, the item (i.e. a study publication) may be rejected and marked for rejection during the pre-filtering process,” Jafar et al., para [0057].); and
updating the record file stored in the computer memory to include the quality score (“one or more item scores 218 may be a numerical value between 0 and 1, and a threshold value for a score may be used for determining if an item should be rejected during the prescreening process,” Jafar et al., para [0057].).
As to claim 10, method claim 2 and apparatus claim 10 are related as apparatus and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 10 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
As to claim 18, method claim 2 and CRM claim 18 are related as CRM and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 18 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
Regarding claim 3 (Original), Jafar et al., as modified by Jain et al., discloses the computer-implemented method of claim 2, further comprising:
identifying a risk of bias associated with the record based on the qualitative assessment of the record (“to reduce bias and error, screening is often performed by more than one reviewer independently,” Jafar et al., para [0053].); and
determining the quality score of the record based on the risk of bias (“one or more item scores 218 may be a numerical value between 0 and 1, and a threshold value for a score may be used for determining if an item should be rejected during the prescreening process,” Jafar et al., para [0057].).
As to claim 11, method claim 3 and apparatus claim 11 are related as apparatus and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 11 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
As to claim 19, method claim 3 and CRM claim 19 are related as CRM and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
Regarding claim 4 (Original), Jafar et al., as modified by Jain et al., discloses the computer-implemented method of claim 1, further comprising:
performing a level of evidence classification to determine a trustworthiness of the record (“The methods and systems described herein can provide fast and reliable prescreening. In one example, validation of the methods and systems described herein showed an accuracy=0.93, precision=0.93, recall=0.93, F1-score=0.93, and AUC=0.93 when compared to the results generated by two independent reviewers and a third verifier,” Jafar et al., para [0104].); and
updating the record file stored in the computer memory to include an indication of the trustworthiness of the record (“In one example, validation of the methods and systems described herein showed an accuracy=0.93,” Jafar et al., para [0104].).
As to claim 12, method claim 4 and apparatus claim 12 are related as apparatus and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 12 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
As to claim 20, method claim 4 and CRM claim 20 are related as CRM and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
Regarding claim 5, Jafar et al., as modified by Jain et al., discloses the computer-implemented method of claim 1, further comprising determining an aim of the systematic literature review based on input provided via a graphical user interface (“The PICOS framework serves as a systematic approach for defining the parameters of a research question and crafting an effective literature search strategy for systematic reviews. PICOS is an acronym where ‘P’ denotes Patient, Problem, or Population; ‘I’ stands for Intervention; ‘C’ signifies Comparison; ‘O’ represents Outcome; and ‘S’ indicates Study Type,” Jafar et al., para [0061]. And, “the number of questions and what data is included in each question may depend on the constraints on the inputs of a model,” Jafar et al., para [0086]. See also Jafar et al., fig. 6.).
As to claim 13, method claim 5 and apparatus claim 13 are related as apparatus and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 13 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
Regarding claim 6 (Original), Jafar et al., as modified by Jain et al., discloses the computer-implemented method of claim 5, wherein determining the question pertaining to the systematic literature review is based on the aim of the systematic literature review (“Once the research question is established, a systematic search strategy is developed and executed 104 to find all the available literature on the topic,” Jafar et al., para [0052]. Also, “The PICOS framework serves as a systematic approach for defining the parameters of a research question and crafting an effective literature search strategy for systematic reviews,” Jafar et al., para [0061]. And, “steps of method 800 may be repeated for all publications or items identified in a search. Items in search results may be processed sequentially with one model, or in parallel using multiple instances of a model to generate a list of preprocessed search results. The preprocessed search results may be used by elements of an SLR system to review the selected references,” Jafar et al., para [0094].).
As to claim 14, method claim 6 and apparatus claim 14 are related as apparatus and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 14 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
Regarding claim 7 (Original), Jafar et al., as modified by Jain et al., discloses the computer-implemented method of claim 5, wherein determining the role for responding to the question is based on the aim of the systematic literature review (“FIG. 1 is a flowchart of one example SLR process. In one example, the steps of SLR may include first formulating a research question 102. Once the research question is established, a systematic search strategy is developed and executed 104 to find all the available literature on the topic, typically involving multiple databases, and may also include other sources, like the reference lists of identified articles, relevant journals, and conference proceedings 112. The identified studies may then be screened based on predefined criteria,” Jafar et al., para [0052]. Also, “The training corpus 702 may include labeled data that may include previously prescreened search data wherein the prescreening was performed by human reviewers,” Jafar et al., para [0075].).
As to claim 15, method claim 7 and apparatus claim 15 are related as apparatus and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 15 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
Regarding claim 8 (Original), Jafar et al., as modified by Jain et al., discloses the computer-implemented method of claim 7, wherein the role for responding to the question comprises an expert in querying databases to identify data related to the aim of the systematic literature review (“The model is fine-tuned using a labeled corpus of prescreened data to effectively provide a probability output for a question input as outlined herein,” Jafar et al., para [0073]. And, “The training corpus 702 may include labeled data that may include previously prescreened search data wherein the prescreening was performed by human reviewers,” Jafar et al., para [0075].).
As to claim 16, method claim 8 and apparatus claim 16 are related as apparatus and method of using same, with each claimed element’s function corresponding to a method step. Accordingly, claim 16 is similarly rejected under the same rationale as applied above with respect to the method claim. Further, Jafar et al., para [0106], teaches a processor, CRM, memory, and instructions.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNE L THOMAS-HOMESCU whose telephone number is (571)272-0899. The examiner can normally be reached Mon-Fri 8-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh M Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANNE L THOMAS-HOMESCU/Primary Examiner, Art Unit 2656