DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1, 9, and 17 have been amended. Claims 1-20 are pending and presented for examination.
Response to Arguments
Rejection under 35 U.S.C. 101
Applicant's arguments have been fully considered but they are not persuasive. Applicant argues, “The claimed process requires actual technical interaction with a foundation model system-generating prompts structured to elicit specific completions, receiving model-generated completions, and submitting revised prompts comprising the combined input. These operations cannot be performed mentally because they require interaction with a foundation model to generate completions based on the model's training.” However, the foundation model is merely used as a tool to perform the mental processes recited in the claim. Specifically, MPEP 2106.04(a)(2)(III)(C) states, “Claims can recite a mental process even if they are claimed as being performed on a computer.” A user is mentally capable of generating a prompt, editing the prompt, and using a foundation model as a generic computer to input the prompt and receive an output.
Further, applicant argues, “The claimed process also provides measurable technical benefits. The specification explains that ‘[t]o promote timely submission to and reply from the foundation model, the initial prompt generation performed by an application (e.g., by a content assistant of an application) is strategically designed for efficiency.’ Specification, paragraph [0035]. The application generates the initial prompt by selectively including contextual information in a way that balances the size of the initial prompt with the quantity of existing content to be included in the prompt which will allow the foundation model to generate a more useful completion to the input.” However, improvements to a technology cannot be provided by a mental process. Specifically, MPEP 2106.05(a) states, “the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements… the improvement can be provided by the additional element(s) in combination with the recited judicial exception.” The improvements to foundation model interaction argued by applicant are provided by a judicial exception (mental process). For example, a person is mentally capable of creating a prompt that includes specific contextual information to elicit a specific response from a foundation model. A person is also mentally capable of inferring a user’s intentions to edit a natural language input and appending additional text to the input to form another prompt. The improvement must be provided by the additional element(s), either alone or in combination with the judicial exception. In this case, the foundation model is an additional element, but it is recited at a high level of generality and amounts to no more than a generic computer.
Rejection under 35 U.S.C. 103
Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1, the claim recites "(a) generate a first prompt to elicit a reply from a foundation model, wherein the first prompt elicits from the foundation model at least a completion to the natural language input, wherein the first prompt includes at least a portion of the natural language input, a task associated with the natural language input, and context information associated with the document", "(b) receive the reply to the first prompt from the foundation model, wherein the reply comprises the completion to the natural language input, wherein the completion is a suggested continuation of the natural language input that augments the natural language input", "(c) receive user input comprising an indication to combine the natural language input with the completion, resulting in revised natural language input comprising the natural language input appended with the completion as a single input string", and "(d) submit, to the foundation model, a second prompt comprising the revised natural language input." Limitations (a) - (d) recite mental processes that may be practically performed in the mind using pen and paper or a generic computer. For example, limitation (a) can be done by someone generating a prompt that includes a task and context in order to get a specific output from an LLM such as ChatGPT. Limitation (b) can be done by someone using a generic computer to receive an output representing an input completion from an LLM. Limitation (c) can be done by someone combining text together to create a new prompt. Limitation (d) can be done by someone using a generic computer to input a new prompt into an LLM. Under its broadest reasonable interpretation when read in light of the specification, the actions to "generate," "receive," and "submit" encompass mental processes practically performed in the human mind by evaluation and judgment using pen and paper or a generic computer. Accordingly, the claim recites an abstract idea (Step 2A, Prong One).
The judicial exception is not integrated into a practical application. In particular, the claim recites additional elements of "(e) receive natural language input from a user relating to content of a document in a user interface of an application", "(f) cause display of the completion in association with the natural language input in the user interface", and "(g) foundation model". Limitations (e) - (f) are mere data gathering and outputting recited at a high level of generality, and thus are insignificant extra-solution activity. In addition, all uses of the recited judicial exception require such data gathering and outputting, and, as such, these limitations do not impose any meaningful limits on the claim; they amount to necessary data gathering and outputting. Further, limitations (a) - (f) are recited as being performed by a computer. In limitations (e) - (f), the computer is used as a tool to perform the generic computer function of receiving and outputting data. In limitations (a) - (d), the computer is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. Limitation (g) provides nothing more than mere instructions to implement an abstract idea on a generic computer. The foundation model recited in limitation (g) is used to perform limitations (a) - (d) without placing any limits on how the model functions; rather, the claim recites only outcomes and does not include any details on how those outcomes are accomplished. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to an abstract idea (Step 2A: YES).
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the recitation of a computer to perform limitations (a) - (f) amounts to no more than mere instructions to apply the exception using a generic computer component. Also as discussed above, limitations (e) - (f) are recited at a high level of generality. These elements amount to receiving and outputting text data involving an LLM, which is well-understood, routine, and conventional activity, as supported by paragraph [0002] of applicant’s specification. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept (Step 2B).
Regarding claims 9 and 17, the claims are rejected with similar analysis to claim 1.
Similarly, dependent claims 2-8, 10-16, and 18-20 recite additional steps that are themselves abstract ideas or that fail to provide meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment and using a computer as a tool to perform the abstract idea.
Claims 2, 10, and 18 read on someone using a generic computer to receive an output from ChatGPT and using that information to fill a document.
Claims 3, 11, and 19 read on someone using a generic computer to determine the time for ChatGPT to respond to a prompt.
Claims 4, 12, and 20 read on someone determining a duration of time that ChatGPT takes to respond to a prompt, and keeping or discarding the output from ChatGPT based on elapsed time.
Claims 5 and 13 read on someone determining a prompt containing context and task information and using a generic computer to input it into ChatGPT, and then discarding the output from ChatGPT based on the time it takes to respond to the prompt.
Claims 6 and 14 read on someone determining if an output from ChatGPT is suitable.
Claims 7 and 15 read on someone determining context information from a document to include as part of the prompt based on a desired task.
Claims 8 and 16 read on someone using a generic computer to submit a prompt to ChatGPT after determining a trigger event has occurred.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 7, 9-10, 15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 20250007870 A1; hereinafter referred to as Kim) in view of Arnold et al. (US 20180101599 A1; hereinafter referred to as Arnold) and Maschmeyer et al. (US 20240320444 A1; hereinafter referred to as Maschmeyer).
Regarding claim 1, Kim teaches: a computing apparatus comprising: one or more computer readable storage media; one or more processors operatively coupled with the one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when executed by the one or more processors ([0385] The electronic device 3400 can include one or more of a processing unit 3402, a memory 3404 or storage device, input devices 3406, a display 3408, output devices 3410, and a power source 3412. In some cases, various implementations of the electronic device 3400 may lack some or all of these components and/or include additional or alternative components), direct the computing apparatus to at least: receive natural language input from a user relating to content of a document ([0098] For example, if a user requires a summary of a particular document, the user input prompt may be a text string comprising the phrase “generate a summary of this page.”) in a user interface of an application ([0209] FIGS. 2A-2B each depict example frontend interfaces that can interact with a system as described herein to receive prompts from a user that can be provided as input to a generative output engine as described herein);
generate a first prompt to elicit a reply from a foundation model, wherein the first prompt elicits from the foundation model at least a completion to the natural language input ([0067] an LLM is configured to determine what word, phrase, number, whitespace, nonalphanumeric character, or punctuation is most statistically likely to be next in a sequence, given the context of the sequence itself. The sequence may be initialized by the input prompt provided to the LLM. In this manner, output of an LLM is a continuation of the sequence of words, characters, numbers, whitespace, and formatting provided as the prompt input to the LLM), wherein the first prompt includes at least a portion of the natural language input ([0059] The string prompt (or "input prompt" or simply "prompt') received as input by a generative output engine can be any suitably formatted string of characters, in any natural language or text encoding), a task associated with the natural language input ([0098] a particular engineered prompt template can be selected based on a desired task for which output of the generative output engine may be useful to assist), and context information associated with the document ([0099] the preconditioning software instance can be further configured to insert one or more additional contextual terms or phrases into the user input);
receive the reply to the first prompt from the foundation model ([0067] an LLM is configured to determine what word, phrase, number, whitespace, nonalphanumeric character, or punctuation is most statistically likely to be next in a sequence, given the context of the sequence itself. The sequence may be initialized by the input prompt provided to the LLM. In this manner, output of an LLM is a continuation of the sequence of words, characters, numbers, whitespace, and formatting provided as the prompt input to the LLM), wherein the reply comprises the completion to the natural language input... ([0070] For example, the grammatically incomplete prompt of “can a computer” invites completion, but also represents an initial phrase that can begin a near limitless number of probabilistically reasonable next words, phrases, punctuation and whitespace. A generative output engine may not provide a contextually interesting or useful response to such an input prompt, effectively choosing a continuation at random from a set of generated continuations of the grammatically incomplete prompt);
cause display of the completion in association with the natural language input in the user interface... ([0261-0262] FIG. 6B depicts an example generative response 660 that is displayed in a preview window 650. In the present example, the generative response 660 includes a list of brainstorming topics or items that are generated in response to the “brainstorming” action and a proposed topic provided in the command prompt interface 602… at least a portion of the generative response 600 may be used as part of a subsequent prompt resulting in a modified or second generative response).
Kim does not explicitly, but Arnold teaches: wherein the completion is a suggested continuation of the natural language input that augments the natural language input… ([0128] If the user has not entered one or more letters of a partial word, then the first selectable word in completion suggestion control 420 will simply be a full word. In this case, this full word may be optionally highlighted to show that it is the next word that will be appended to the text in text input region 410 if it, or any subsequent word in the completion suggestion control, is selected by the user);
resulting in revised natural language input comprising the natural language input appended with the completion as a single input string… ([0129] the completion suggestion control 420 includes a sequence of user selectable words (425 through 480) of the multi-word text completion suggestion. User selection of any of the sequence of user selectable words (425 through 480) causes selection of both the selected word and all preceding words of the multi-word text completion suggestion presented via the completion suggestion control 420. As such, selection of any of these words (425 through 480) selects that word and all preceding words, and appends those selected words to the end of the existing text in the text input region 410).
Kim and Arnold are considered analogous in the field of natural language processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kim to combine the teachings of Arnold because doing so would allow a user to choose specific suggested completions of a natural language input in order to form new text for use or further processing, leading to improved and more efficient content generation (Arnold [0004] Advantageously, the use of these multi-word text completion suggestions significantly reduces user workload and time spent in generating a wide variety of document types while possibly also improving the quality of those documents).
The combination of Kim and Arnold does not explicitly, but Maschmeyer teaches: receive user input comprising an indication to combine the natural language input with the completion… ([0119] the computing system receives, via the user interface, user input for editing and/or changing a portion of the at least one output. The edits/changes may be used to indicate the user's preferences with respect to the content of the selected portions. User input for editing/changing may comprise at least one of: deletion of a portion of an output, replacement of a portion of an output, or addition of text or image);
and submit, to the foundation model, a second prompt ([0099] The second text prompt may be an input prompt to the generative model for a subsequent iteration of content generation. That is, the generative model may produce new content based on the second text prompt. The first text prompt, e.g., the initial prompt supplied by the user, may be modified so that the second text prompt reflects the user's preferences as indicated by the user-selected portions from the at least one output. In particular, the second text prompt may retain the original intent and context associated with the first text prompt while also representing preference data supplied by the user through their selection(s)) comprising the revised natural language input ([0120] the second text prompt may be obtained by modifying the first text prompt to reflect both the user selection and the user edits).
Kim, Arnold, and Maschmeyer are considered analogous in the field of natural language processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kim and Arnold to combine the teachings of Maschmeyer because doing so would allow a user to edit outputs of a foundation model to add specific new text and use the edited output as part of another prompt to input into the foundation model, increasing user flexibility in getting a desired output (Maschmeyer [0120] the second text prompt may be obtained by modifying the first text prompt to reflect both the user selection and the user edits. The first text prompt, i.e., the initial prompt supplied by the user, may be modified so that the second text prompt reflects the user's preferences as indicated by the user selection and the user edits of the at least one output. In particular, the second text prompt may retain the original intent and context associated with the first text prompt while also representing preference data supplied by the user through the selections and edits).
Regarding claim 2, the combination of Kim, Arnold, and Maschmeyer teaches: the computing apparatus of claim 1. Maschmeyer further teaches: wherein the program instructions further direct the computing apparatus to: receive a second reply generated by the foundation model in response to the second prompt ([0122] the computing system may then request for the generative model to produce a second output based on the second text prompt);
and populate the document with content from the second reply according to the task ([0103] The computing system may update the sandbox region to include portions of different outputs that are selected by the user. In particular, the sandbox region may be populated dynamically as the user selects portions of outputs that are displayed in the output display area).
Kim, Arnold, and Maschmeyer are considered analogous in the field of natural language processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kim, Arnold, and Maschmeyer to further combine the teachings of Maschmeyer because doing so would allow a user to easily use content from the output of the foundation model to populate a document or similar digital container, increasing user flexibility in generating content (Maschmeyer [0085] the sandbox area may comprise a canvas or textbox that is updated to represent a user's selections across multiple content outputs. In particular, the sandbox area may be gradually populated as the user makes selections of portions from different content outputs that are displayed in the output display area. In this way, when the user is finished making their selections, the totality of the selected portions may be displayed in the sandbox area).
Regarding claim 7, the combination of Kim, Arnold, and Maschmeyer teaches: the computing apparatus of claim 1. Kim further teaches: wherein the context information includes a portion of the content from the document selected based on the task associated with the natural language input ([0078] an LLM generated output can convert static content to dynamic content. In one example, a user-generated document can include a string that contextually references another software platform. For example, a documentation platform document may include the string "this document corresponds to project ID 123456, status of which is pending." In this example, a suitable LLM prompt may be provided that causes the LLM to determine an association between the documentation platform and a project management platform based on the reference to "project ID 123456.").
Regarding claims 9 and 17, they recite similar limitations as claim 1 and therefore are rejected similarly.
Regarding claims 10 and 18, they recite similar limitations as claim 2 and therefore are rejected similarly.
Regarding claim 15, it recites similar limitations as claim 7 and therefore is rejected similarly.
Claims 3-4, 8, 11, 16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Arnold and Maschmeyer, as applied to claims 1-2, 7, 9-10, 15, and 17-18 above, and further in view of Jain et al. (US 20240256582 A1; hereinafter referred to as Jain).
Regarding claim 3, the combination of Kim, Arnold, and Maschmeyer teaches: the computing apparatus of claim 2. The combination of Kim, Arnold, and Maschmeyer does not explicitly, but Jain teaches: wherein the program instructions further direct the computing apparatus to track an elapsed time from when the first prompt is submitted to the foundation model to when the completion is ready for display in the user interface ([0021] as the time to generate a response using a generative Al model may exceed a threshold latency (e.g., the response may take more than two seconds to generate), the response (e.g., a summary of search results) may be generated in the background and stored in a frequently asked questions (FAQ) database while the search results themselves are displayed in near real time).
Kim, Arnold, Maschmeyer, and Jain are considered analogous in the field of natural language processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kim, Arnold, and Maschmeyer to combine the teachings of Jain because doing so would improve user experience by tracking the time it takes for output of a foundation model to be generated in order to determine what output should be shown to the user based on the elapsed time (Jain [0023] The technical benefits of immediately displaying search results and then generating a more comprehensive summary of the search results in the background using a generative Al model include improved user experience and reduced latency when retrieving the more comprehensive summary of the search results during subsequent searches that involve the search results).
Regarding claim 4, the combination of Kim, Arnold, Maschmeyer, and Jain teaches: the computing apparatus of claim 3. Jain further teaches: wherein to cause display of the completion in the user interface, the program instructions direct the computing apparatus to cause display of the completion in the user interface when the elapsed time is less than a threshold value ([0086] a maximum latency for generating the answer summary is determined... The maximum snippet size may be set such that the answer summary may be generated in less time than the maximum latency).
Regarding claim 8, the combination of Kim, Arnold, and Maschmeyer teaches: the computing apparatus of claim 1. The combination of Kim, Arnold, and Maschmeyer does not explicitly, but Jain teaches: wherein the program instructions further direct the computing apparatus to submit the first prompt to the foundation model when the computing apparatus detects a triggering event while receiving the natural language input in the user interface ([0022] a search and knowledge management system may not utilize generative Al when a search query is submitted until it detects a triggering condition, such as that a threshold number of end users have asked a semantically similar question or that the semantically similar question has been asked a threshold number of times. Upon detection of the triggering condition, a "simulated” search for the search query may be performed using "generic" user permissions set based on the user permissions of the end users that asked the semantically similar question (e.g., the user permissions may be set as the most restrictive permissions out of the end users). The set of search results generated may be provided with a prompt to a generative Al model in order to generate a summary of the set of search results).
Kim, Arnold, Maschmeyer, and Jain are considered analogous in the field of natural language processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kim, Arnold, and Maschmeyer to combine the teachings of Jain because doing so would reduce downtime of the foundation model and improve user experience by submitting prompts only when specified triggering conditions are detected (Jain [0004] the technical benefits of the systems and methods disclosed herein include reduced energy consumption and cost of computing resources, reduced search system downtime, increased quality of search results, increased reliability of information provided to search users, and improved search system performance).
Regarding claims 11 and 19, they recite similar limitations as claim 3 and therefore are rejected similarly.
Regarding claim 16, it recites similar limitations as claim 8 and therefore is rejected similarly.
Regarding claim 20, it recites similar limitations as claim 4 and therefore is rejected similarly.
Claims 5-6 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Arnold, Maschmeyer, and Jain, as applied to claims 3-4, 8, 11, 16, and 19-20 above, and further in view of Kakodkar et al. (US 20240428003 A1; hereinafter referred to as Kakodkar).
Regarding claim 5, the combination of Kim, Arnold, Maschmeyer, and Jain teaches: the computing apparatus of claim 4. Kim further teaches: wherein the program instructions further direct the computing apparatus to: receive a second natural language input in the user interface ([0334] the system is configured to generate a prompt using at least a portion of the natural language user input);
generate a third prompt ([0338] multiple prompts may be constructed, as described earlier, each prompt corresponding to a different portion of the natural language input) to elicit a third reply from the foundation model, wherein the third prompt elicits from the foundation model at least a completion to the second natural language input ([0067] an LLM is configured to determine what word, phrase, number, whitespace, nonalphanumeric character, or punctuation is most statistically likely to be next in a sequence, given the context of the sequence itself. The sequence may be initialized by the input prompt provided to the LLM. In this manner, output of an LLM is a continuation of the sequence of words, characters, numbers, whitespace, and formatting provided as the prompt input to the LLM), a second task associated with the second natural language input ([0098] a particular engineered prompt template can be selected based on a desired task for which output of the generative output engine may be useful to assist), and the context information associated with the document ([0290] the one or more prompts may include predefined query prompt text that is adapted for one or more of: a project content type, a knowledge base or knowledge base documentation content type, a user or product profile content type, a blog or journal content type, a meeting notes content type, a code summary or code documentation content type, or other content type);
and receive the third reply to the third prompt from the foundation model, wherein the third reply comprises the completion to the second natural language input... ([0067] an LLM is configured to determine what word, phrase, number, whitespace, nonalphanumeric character, or punctuation is most statistically likely to be next in a sequence, given the context of the sequence itself. The sequence may be initialized by the input prompt provided to the LLM. In this manner, output of an LLM is a continuation of the sequence of words, characters, numbers, whitespace, and formatting provided as the prompt input to the LLM).
Jain further teaches: determine that a second elapsed time from when the third prompt was submitted to the foundation model to when the completion was ready for display exceeds the threshold value... ([0021] as the time to generate a response using a generative Al model may exceed a threshold latency (e.g., the response may take more than two seconds to generate), the response (e.g., a summary of search results) may be generated in the background and stored in a frequently asked questions (FAQ) database while the search results themselves are displayed in near real time).
The combination of Kim, Arnold, Maschmeyer, and Jain does not explicitly, but Kakodkar teaches: and discard the completion based on the second elapsed time exceeding the threshold value ([0096] in these implementations, machine-learning application may apply a time threshold for applying individual trained models (e.g., 0.5 ms) and utilize only those individual outputs that are available within the time threshold. Outputs that are not received within the time threshold may not be utilized, e.g., discarded).
Kim, Arnold, Maschmeyer, Jain, and Kakodkar are considered analogous in the field of natural language processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kim, Arnold, Maschmeyer, and Jain to combine the teachings of Kakodkar because doing so would save resource usage by discarding a foundation model output that would take too long to compute (Kakodkar [0096] Outputs that are not received within the time threshold may not be utilized, e.g., discarded. For example, such approaches may be suitable when there is a time limit specified while invoking the machine-learning application, e.g., by operating system 608 or one or more applications 612).
Regarding claim 6, the combination of Kim, Arnold, Maschmeyer, Jain, and Kakodkar teaches: the computing apparatus of claim 5. Arnold further teaches: wherein the program instructions further direct the computing apparatus to evaluate the completion for suitability ([0063] one or more fully automatic evaluations, each based on a large corpus of publicly available documents from a single author, were considered to determine which suggested completions were acceptable).
Regarding claim 12, the combination of Kim, Arnold, Maschmeyer, and Jain teaches: the method of claim 11. Jain further teaches: wherein causing display of the completion in the user interface comprises causing display of the completion in the user interface when the elapsed time is less than a threshold value... ([0086] a maximum latency for generating the answer summary is determined... The maximum snippet size may be set such that the answer summary may be generated in less time than the maximum latency).
The combination of Kim, Arnold, Maschmeyer, and Jain does not explicitly teach, but Kakodkar teaches: and discarding the completion when the elapsed time is greater than the threshold value ([0096] in these implementations, machine-learning application may apply a time threshold for applying individual trained models (e.g., 0.5 ms) and utilize only those individual outputs that are available within the time threshold. Outputs that are not received within the time threshold may not be utilized, e.g., discarded).
Kim, Arnold, Maschmeyer, Jain, and Kakodkar are considered analogous art in the field of natural language processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kim, Arnold, Maschmeyer, and Jain with the teachings of Kakodkar because doing so would conserve computing resources by discarding a foundation model output that takes too long to compute (Kakodkar [0096] Outputs that are not received within the time threshold may not be utilized, e.g., discarded. For example, such approaches may be suitable when there is a time limit specified while invoking the machine-learning application, e.g., by operating system 608 or one or more applications 612).
Regarding claim 13, it recites similar limitations as claim 5 and therefore is rejected similarly.
Regarding claim 14, it recites similar limitations as claim 6 and therefore is rejected similarly.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nathan Tengbumroong whose telephone number is (703)756-1725. The examiner can normally be reached Monday - Friday, 11:30 am - 8:00 pm EST.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hai Phan, can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NATHAN TENGBUMROONG/Examiner, Art Unit 2654
/HAI PHAN/Supervisory Patent Examiner, Art Unit 2654