DETAILED ACTION
Claims 1-20 are pending and have been examined.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to an abstract idea without significantly more.
Here, under Step 1 of the Alice analysis, method claims 1-8 are directed to a series of steps; system claims 9-16 are directed to a storage device that stores program instructions and one or more processors operably connected to the storage device and configured to execute the program instructions; and computer program product claims 17-20 are directed to a computer-readable storage medium having program instructions embodied thereon. Thus, the claims are directed to a process, a machine, and a manufacture, respectively.
Under Step 2A Prong One of the analysis, the claims recite an abstract idea. The claims recite airline evaluation, comment recommendation, and finding prediction mapping, including providing and receiving steps.
The limitations of providing and receiving are a process that, under its broadest reasonable interpretation, covers concepts of organizing human activity but for the recitation of generic computer components.
Specifically, the claim elements recite responsive to a first user partial entry in an evaluation category entry field, providing a first number of prompts of a suggested evaluation category based on a number of predefined evaluation categories; receiving selection of one of the first number of prompts or user entry of an alternative evaluation category; responsive to a second user partial entry in a task description entry field, providing a second number of prompts of a suggested full task description based on historical contents related to the evaluation category; receiving selection of one of the second number of prompts or user input of an alternative full task description; responsive to a third user partial entry in a comments entry field, providing third prompts comprising initial subsets of words of suggested comments based on historical contents related to the task description; receiving selection of one of the third prompts or user input of an alternate comment; providing a number of predicted findings based on the evaluation category, task description, and comment; and receiving selection of one of the predicted findings or user input of an alternative finding.
That is, other than reciting a number of processors and a first and second large language model, the claim limitations merely cover commercial interactions, including business relations, thus falling within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Under Step 2A Prong Two, the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This judicial exception is not integrated into a practical application. The claims include a number of processors and a first and second large language model. The number of processors and the first and second large language model in the steps are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. As a result, the claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a number of processors and a first and second large language model amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
None of the dependent claims recite additional limitations that are sufficient to amount to significantly more than the abstract idea. Claims 2-4 further describe the predefined evaluation categories and the first large language model. Claims 5-8 further describe the second large language model, user entries and input, and the initial subset of words of a suggested comment. Similarly, dependent claims 10-16 and 18-20 recite additional details that further restrict/define the abstract idea. A more detailed abstract idea remains an abstract idea.
Under step 2B of the analysis, the claims include, inter alia, a number of processors and a first and second large language model.
As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
There is no improvement to another technology or technical field, or to the functioning of the computer itself. Moreover, considered individually, there are no meaningful limitations beyond generally linking the abstract idea to a particular technological environment, i.e., implementation via a computer system. Further, taken as a combination, the limitations add nothing more than what is present when the limitations are considered individually. There is no indication that the combination provides any effect regarding the functioning of the computer or any improvement to another technology.
In addition, as discussed in paragraph 0027 of the specification, “As depicted, computer system 150 includes a number of processor units 152 that are capable of executing program code 154 implementing processes in the illustrative examples. As used herein, a processor unit in the number of processor units 152 is a hardware device and is comprised of hardware circuits such as those on an integrated circuit that respond and process instructions and program code that operate a computer. When a number of processor units 152 execute program code 154 for a process, the number of processor units 152 is one or more processor units that can be on the same computer or on different computers.”
As such, this disclosure supports the finding that no more than a general purpose computer, performing generic computer functions, is required by the claims.
Viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corporation Pty. Ltd. v. CLS Bank Int’l et al., No. 13-298 (U.S. June 19, 2014).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hawes et al (US 20240403634 A1), in view of Roper, Jr. et al (US 20260017451 A1).
As per claim 1, Hawes et al disclose a computer-implemented method, the method comprising: using a number of processors to perform (i.e., FIG. 11B is a flowchart illustrating an example process 1100B for interacting with an LLM. This process, in full or parts, can be executed by one or more hardware processors, whether in association with a singular or multiple computing devices like user device 1150, AIS 1102, data processing services 1120, LLM 1130, ¶ 0200):
responsive to a second user partial entry in a task description entry field, providing, by the first large language model, a second number of prompts of a suggested full task description based on historical contents related to the evaluation category; receiving selection of one of the second number of prompts or user input of an alternative full task description (i.e., a function may utilize function inputs to generate a prompt based on the task of “scheduling maintenance for the oldest piece of equipment” that includes information from data inputs associated with scheduling maintenance, equipment information, and/or other data needed to perform the task or identify further tasks. Upon executing a function, the system may receive responses from LLMs based on the prompt, ¶0158);
responsive to a third user partial entry in a comments entry field, providing, by the first large language model, third prompts comprising initial subsets of words of suggested comments based on historical contents related to the task description; receiving selection of one of the third prompts or user input of an alternate comment (i.e., Upon executing a function, the system may receive responses from LLMs based on the prompt. The responses from the LLMs may contain a result for the task and/or another step or subtask for the function to use to generate another prompt for the LLMs. For example, the responses from an LLM may contain results for the task, such as information that allows the application to schedule maintenance for a piece of equipment, or a request for more information, ¶ 0158).
Hawes et al do not disclose airline evaluation, comment recommendation, and finding prediction mapping; responsive to a first user partial entry in an evaluation category entry field, providing, by a first large language model, a first number of prompts of a suggested evaluation category based on a number of predefined evaluation categories; receiving selection of one of the first number of prompts or user entry of an alternative evaluation category; providing, by a second large language model, a number of predicted findings based on the evaluation category, task description, and comment; and receiving selection of one of the predicted findings or user input of an alternative finding.
Roper, Jr. et al disclose a product (e.g., airplane, spacecraft, exploration rover, missile system, automobile, rail system, marine vehicle, remotely operated underwater vehicle, robot, drone, medical device, biomedical device, pharmaceutical compound, drug, power generation system, smart grid metering and management system, microprocessor, integrated circuit, building, bridge, tunnel, chemical plants, oil and gas pipeline, refinery, etc.) manufacturer may use IDEP platform 100 to develop a new product (¶ 0162). Multiple digital twins may be created to evaluate the future performance of the physical assembly or its component parts based on specific performance metrics. Simulations are conducted with various control policies to assess the impact on performance objectives and costs. The outcome of these simulations helps in deciding which specific control policies should be implemented (e.g., tail volume coefficients and sideslip angle for an airplane product) (¶ 0187).
The digital thread script to update the sharable document splice may further comprise code to send a prompt to a large language model (LLM)-based Artificial Intelligence (AI) module. The prompt may be generated based on a given subunit and the digital artifact (¶ 0042). This integration of AI assistance, especially via Large Language Model (LLMs), expedites the process to describe or summarize a DE model into texts that are easy to read and understand, improves the referencing process, and allows single-click generation of model documentation (¶ 0111).
In various embodiments, document parts or subunits may be classified according to type, structural form, formatting, spacing, syntax, sectioning, content, theme, or any other predefined or user-defined rules. In some embodiments, an LLM may be deployed to understand user-defined rules for splicing a document (¶ 0345).
Computing system 208 may also be configured to communicate with one or more DE tools 202 to send engineering-related inputs for executing analyses, models, simulations, tests, etc. and to receive engineering-related outputs associated with the results (¶ 0204). The results (or “engineering-related data outputs” or “digital artifacts”) of these models, tests, and/or simulations can be transmitted back and received at computing system 208 (¶ 0211). After receiving engineering-related data outputs or digital artifacts from DE tools 202, computing system 208 can then process the received engineering-related data outputs to evaluate whether or not the requirements identified in the common V&V product of interest (e.g., regulatory standard 210E, medical standard 210G, medical certification regulation 210H, manufacturing standard 210I, manufacturing certification regulation 210J, etc.) are satisfied (¶ 0213).
Hawes et al and Roper, Jr. et al are concerned with efficient large language model input and output prediction generation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include airline evaluation, comment recommendation, and finding prediction mapping; responsive to a first user partial entry in an evaluation category entry field, providing, by a first large language model, a first number of prompts of a suggested evaluation category based on a number of predefined evaluation categories; receiving selection of one of the first number of prompts or user entry of an alternative evaluation category; providing, by a second large language model, a number of predicted findings based on the evaluation category, task description, and comment; and receiving selection of one of the predicted findings or user input of an alternative finding in Hawes et al, as seen in Roper, Jr. et al. The claimed invention is merely a combination of old elements; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 2, Hawes et al do not disclose that the predefined evaluation categories include: Flight Operations; Simulator Operations; Training Department; Safety Department; Crew Dispatch Department; and Crew Scheduling Department.
Roper, Jr. et al disclose a product (e.g., airplane, spacecraft, exploration rover, missile system, automobile, rail system, marine vehicle, remotely operated underwater vehicle, robot, drone, medical device, biomedical device, pharmaceutical compound, drug, power generation system, smart grid metering and management system, microprocessor, integrated circuit, building, bridge, tunnel, chemical plants, oil and gas pipeline, refinery, etc.) manufacturer may use IDEP platform 100 to develop a new product (¶ 0162). Multiple digital twins may be created to evaluate the future performance of the physical assembly or its component parts based on specific performance metrics. Simulations are conducted with various control policies to assess the impact on performance objectives and costs. The outcome of these simulations helps in deciding which specific control policies should be implemented (e.g., tail volume coefficients and sideslip angle for an airplane product) (¶ 0187).
Digital threads further enable seamless data exchange and collaboration between departments and stakeholders, ensuring optimized and validated designs (¶ 0163). A user may be a human individual, a company, an organization, an entity, a department within an organization, a representative of an organization and/or person, an artificial user such as algorithms, artificial intelligence, or other software that interfaces, and/or the like (¶ 0550).
Hawes et al and Roper, Jr. et al are concerned with efficient large language model input and output prediction generation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include predefined evaluation categories comprising Flight Operations, Simulator Operations, Training Department, Safety Department, Crew Dispatch Department, and Crew Scheduling Department in Hawes et al, as seen in Roper, Jr. et al. The claimed invention is merely a combination of old elements; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 3, Hawes et al disclose the first large language model comprises a generative pre-trained transformer (GPT) algorithm that uses evaluation categories, task descriptions, and comments to find relationships among words present in a sentence or paragraph and determine significance of words based on position in a sentence to find contextual meaning of a sentence or paragraph (i.e., Examples of models, language models, and/or LLMs that may be used in various implementations of the present disclosure include, Generative Pre-trained Transformer 2 (GPT-2), Generative Pre-trained Transformer 3 (GPT-3), Generative Pre-trained Transformer 4 (GPT-4), ¶ 0060, wherein A language model may generate many combinations of one or more next words (and/or sentences) that are coherent and contextually relevant, ¶ 0056).
As per claim 4, Hawes et al disclose the GPT algorithm is integrated with a web-based frontend application with an interface, wherein the interface provides a tabular format with a number of intelligent search fields that trigger a frontend application programming interface call in response to entry of a number of words (i.e., Examples of models, language models, and/or LLMs that may be used in various implementations of the present disclosure include, Generative Pre-trained Transformer 2 (GPT-2), Generative Pre-trained Transformer 3 (GPT-3), Generative Pre-trained Transformer 4 (GPT-4), ¶ 0060, wherein, as described above, in various implementations certain functionality may be accessible by a user through a web-based viewer (such as a web browser) or other suitable software program, ¶ 0296).
As per claim 5, Hawes et al disclose the second large language model comprises a generative pre-trained transformer (GPT) algorithm that uses evaluation categories, task descriptions, and comments from the first large language model or manually entered comments from the user to map contexts of sentences and paragraphs to types of findings (i.e., Examples of models, language models, and/or LLMs that may be used in various implementations of the present disclosure include, Generative Pre-trained Transformer 2 (GPT-2), Generative Pre-trained Transformer 3 (GPT-3), Generative Pre-trained Transformer 4 (GPT-4), ¶ 0060, wherein A language model may calculate the probability of different word combinations based on the patterns learned during training (based on a set of text data from books, articles, websites, audio files, etc.). A language model may generate many combinations of one or more next words (and/or sentences) that are coherent and contextually relevant, ¶ 0056).
As per claim 6, Hawes et al disclose user entries of an alternative evaluation category, alternative full task description, or alternate comment is used to retrain and update the first large language model (i.e., the LLM may be fine-tuned or trained on appropriate training data (e.g., annotated data showing correct or incorrect pairings of sample natural language queries and responses). After receiving a user input, the automatic correction system 102 may generate and provide a prompt to a LLM 130a, which may include one or more large language models trained to fulfill a modeling objective, such as task completion, text generation, summarization, etc., ¶ 0097).
As per claim 7, Hawes et al disclose user input of an alternate finding is used to retrain and update the second large language model (i.e., the LLM may be fine-tuned or trained on appropriate training data (e.g., annotated data showing correct or incorrect pairings of sample natural language queries and responses). After receiving a user input, the automatic correction system 102 may generate and provide a prompt to a LLM 130a, which may include one or more large language models trained to fulfill a modeling objective, such as task completion, text generation, summarization, etc., ¶ 0097).
As per claim 8, Hawes et al disclose the initial subset of words of a suggested comment comprises 3 to 4 words (i.e., LLMs may work by taking an input text and repeatedly predicting the next word or token (e.g., a portion of a word, a combination of one or more words or portions of words, punctuation, and/or any combination of the foregoing and/or the like), ¶ 0057).
Claims 9-16 are rejected based upon the same rationale as the rejection of claims 1-8, respectively, since they are the system claims corresponding to the method claims.
Claims 17-20 are rejected based upon the same rationale as the rejection of claims 1-3 and 5, respectively, since they are the computer program product claims corresponding to the method claims.
Conclusion
The prior art made of record and not relied upon, listed on the accompanying PTO-892, is considered pertinent to applicant's disclosure and discloses efficient large language model input and output prediction generation.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRE D BOYCE whose telephone number is (571) 272-6726. The examiner can normally be reached Monday-Friday, 10:00 AM - 6:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao (Rob) Wu can be reached at (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDRE D BOYCE/Primary Examiner, Art Unit 3623 March 7, 2026