DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 10/10/2025 ("Arguments/Remarks") have been fully considered but they are not persuasive.
Applicant's response regarding the Claim Rejections under 35 U.S.C. § 103 (pgs. 10 – 11), with respect to the amended claim(s), has been considered but is moot, because the arguments/remarks are directed to amended claim limitations that were not previously examined by the examiner. New rejections are set forth in the current Office action to address the amended claim limitations.
Argument – 1: (pg. 15) Applicant contends: “Based on the above, Applicant respectfully submits that the claims, as amended, provide an improvement to computer functionality. For example, the claims provide a technique for solving limitations associated with user-generated workflows. In particular, such limitations, and Applicant's solutions, are described in [0013], [0024], and [0029] of the specification:”
Regarding the above argument, the Examiner respectfully disagrees with Applicant's assertion that the amended limitations provide an improvement to computer functionality. The alleged improvements in paragraphs [0013], [0024], and [0029] are not reflected in the claim. While the specification discusses benefits such as reducing the learning curve associated with graphical user interfaces, improving workflow generation for inexperienced users, and simplifying model deployment through a text-to-text architecture, the claim does not recite limitations that implement or achieve these alleged improvements. In particular, the claim does not recite any particular mechanism or specific manner for achieving such improvements. Accordingly, the claimed invention does not reflect the asserted technological improvements described in the specification.
Argument – 2: (pg. 16 – 17) Applicant contends: “Additionally, the techniques disclosed in Applicant's specification propose a model that decreases computing time. The text-to-text architecture does not redo performance or hardware compatibility tests, which helps keep response times below applications' lower bounds (something that other machine learning models may struggle with). Further, the text-to-text …”
Regarding the above argument, the Examiner respectfully notes that while Applicant asserts that the disclosed text-to-text architecture decreases computing time, improves scalability, and avoids repeated performance or hardware compatibility testing, the independent claims do not recite limitations that meaningfully implement these alleged improvements. Although Applicant contends that the claims explicitly recite the text-to-text architecture described in paragraphs [0013], [0024], and [0029], the claims do not set forth any particular mechanism or specific manner by which the architecture achieves reduced computing time, improved scalability, or avoidance of hardware compatibility testing. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1 – 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
In Step 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the claims recite a process that, under the broadest reasonable interpretation, falls within one of the statutory categories (processes).
In Step 2A, Prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:
Regarding claim 1 and analogous claim 20:
generating text descriptive of at least a portion of the partially specified computerized workflow;
(i.e., under the broadest reasonable interpretation, the claim recites an abstract idea (mental process): it involves creating and organizing descriptive text about a workflow, see MPEP 2106.04).
to determine an output text descriptive of the one or more additional steps to be added;
(i.e., under the broadest reasonable interpretation, the claim recites an abstract idea (mental process): it involves making a determination about modifying or extending a workflow by adding additional steps, see MPEP 2106.04).
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental processes grouping. Accordingly, the claim recites an abstract idea.
In Step 2A, Prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate the judicial exception into a practical application:
receiving an indication to predict one or more additional steps to be added to a partially specified computerized workflow based at least in part on the partially specified computerized workflow;
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation directed to mere data gathering, and the claimed elements are considered insignificant extra-solution activity, see MPEP 2106.05(g)).
providing, to a machine learning model, context information comprising application metadata …
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation directed to mere data gathering, and the claimed elements are considered insignificant extra-solution activity, see MPEP 2106.05(g)).
… wherein the machine learning model is a text-to-text pre-trained model:
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation that simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h)).
providing to the machine learning model, machine learning inputs based at least in part on the descriptive text
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation directed to mere data gathering, and the claimed elements are considered insignificant extra-solution activity, see MPEP 2106.05(g)).
wherein the output text is based on the context information and the machine learning inputs;
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation that simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h)).
using one or more processors to automatically implement the one or more additional steps to be added to the partially specified computerized workflow.
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation that does not amount to more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, see MPEP 2106.05(f)).
In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception:
Regarding limitation (VIII), the limitation recites mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, which is insufficient to transform the judicial exception into a patent-eligible invention because the limitation generally applies a generic computer and/or process to the judicial exception, see MPEP 2106.05(f).
Regarding limitations (III), (IV), and (VI), the additional elements considered extra-/post-solution activity, as analyzed above, are activities that the courts have recognized as well-understood, routine, and conventional computer functions, specifically:
Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP 2106.05(d)(II).
Regarding limitations (V) and (VII), the additional elements are deemed insufficient to transform the judicial exception into a patent-eligible invention because they generally link the judicial exception to the technology environment, see MPEP 2106.05(h).
As analyzed above, the additional elements do not integrate the noted judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
Regarding claim 19:
generating text descriptive of at least a portion of the partially specified computerized workflow;
(i.e., under the broadest reasonable interpretation, the claim recites an abstract idea (mental process): it involves creating and organizing descriptive text about a workflow, see MPEP 2106.04).
to determine an output text descriptive of the one or more additional steps to be added;
(i.e., under the broadest reasonable interpretation, the claim recites an abstract idea (mental process): it involves making a determination about modifying or extending a workflow by adding additional steps, see MPEP 2106.04).
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental processes grouping. Accordingly, the claim recites an abstract idea.
In Step 2A, Prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate the judicial exception into a practical application:
receiving an indication to predict one or more additional steps to be added to a partially specified computerized workflow based at least in part on the partially specified computerized workflow;
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation directed to mere data gathering, and the claimed elements are considered insignificant extra-solution activity, see MPEP 2106.05(g)).
providing, to a machine learning model, context information comprising application metadata
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation directed to mere data gathering, and the claimed elements are considered insignificant extra-solution activity, see MPEP 2106.05(g)).
wherein the machine learning model is a text-to-text pre-trained model;
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation that simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h)).
providing to the machine learning model, machine learning inputs based at least in part on the descriptive text
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation directed to mere data gathering, and the claimed elements are considered insignificant extra-solution activity, see MPEP 2106.05(g)).
wherein the output text is based on the context information and the machine learning inputs;
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation that simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h)).
using one or more processors to automatically implement the one or more additional steps to be added to the partially specified computerized workflow.
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation that does not amount to more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, see MPEP 2106.05(f)).
one or more processors configured to:
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation that does not amount to more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, see MPEP 2106.05(f)).
a memory coupled to at least one of the one or more processors and configured to provide at least one of the one or more processors with instructions.
(i.e., deemed insufficient to transform the judicial exception into a patent-eligible invention because the claim recites a limitation that does not amount to more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, see MPEP 2106.05(f)).
In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception:
Regarding limitations (VIII), (IX), and (X), the limitations recite mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, which is insufficient to transform the judicial exception into a patent-eligible invention because the limitations generally apply a generic computer and/or process to the judicial exception, see MPEP 2106.05(f).
Regarding limitations (III), (IV), and (VI), the additional elements considered extra-/post-solution activity, as analyzed above, are activities that the courts have recognized as well-understood, routine, and conventional computer functions, specifically:
Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP 2106.05(d)(II).
Regarding limitations (V) and (VII), the additional elements are deemed insufficient to transform the judicial exception into a patent-eligible invention because they generally link the judicial exception to the technology environment, see MPEP 2106.05(h).
As analyzed above, the additional elements do not integrate the noted judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
Regarding claim 2, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein the indication to predict the one or more additional steps to be added is generated by a user via a graphical user interface
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 3, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein the partially specified computerized workflow has at least in part been specified manually by a user via a graphical user interface.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 4, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein the partially specified computerized workflow has at least in part been generated automatically
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 5, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein the one or more additional steps to be added belong to an enumerated collection of available steps that the machine learning model is permitted to output.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 6, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein generating the descriptive text includes converting data in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format to a text format.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 7, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein the text-to-text pre-trained model is a large language model (LLM) that has an Encoder-Decoder architecture.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 8, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein the machine learning model has been pre-trained on a dataset and then fine-tuned for a prediction task.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 9, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein the machine learning model has been trained based at least in part on a plurality of training instances of synthetically generated training data.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 10, the claim depends upon claim 9 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein at least one training instance of the plurality of training instances comprises a flow representation that is divided into an initial steps portion and an additional steps portion at a randomly selected split point.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 11, the claim depends upon claim 9 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein the plurality of training instances is comprised of flow representations of different lengths.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 12, the claim depends upon claim 11 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein at least one flow representation of the flow representations of different lengths is comprised of flow steps selected according to a statistical distribution of flow steps.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 13, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein the text-to-text pre-trained model outputs a confidence score associated with the output text.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 14, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
further comprising providing to the machine learning model, a selection of a tensor data object from a list of tensor data objects,
- The recitation in the additional limitation is directed to mere data gathering, which is insufficient to transform the judicial exception because the claimed elements are considered insignificant extra-solution activity and well-understood, routine, and conventional, see MPEP 2106.05(d).
Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP 2106.05(d)(II).
- The additional limitations, as analyzed above, fail to integrate the judicial exception into a practical application at Step 2A and do not provide an inventive concept in Step 2B.
wherein each tensor data object of the list of tensor data objects is associated with different model weights for the machine learning model.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 15, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein using the one or more processors to automatically implement the one or more additional steps to be added includes causing the one or more processors to convert a text format prediction to one or more application programming interface messages.
- Deemed insufficient to transform the judicial exception into a patent-eligible invention because the limitation is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, and is considered to add the words “apply it” (or an equivalent) to the judicial exception, see MPEP 2106.05(f).
- Limitations directed to using the computer as a tool for implementing an abstract idea cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 16, the claim depends upon claim 15 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein using the one or more processors to automatically implement the one or more additional steps to be added further includes transmitting the one or more application programming interface messages to an application configured to generate computerized workflow steps.
- Deemed insufficient to transform the judicial exception into a patent-eligible invention because the limitation is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, and is considered to add the words “apply it” (or an equivalent) to the judicial exception, see MPEP 2106.05(f).
- Limitations directed to using the computer as a tool for implementing an abstract idea cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 17, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
wherein the partially specified computerized workflow includes a trigger condition and at least one action step that is configured to execute in response to a determination that the trigger condition has occurred.
- The recitation in the additional limitation simply links the judicial exception to a field of use and/or technology environment, see MPEP 2106.05(h).
- Limitations directed to field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Regarding claim 18, the claim depends upon claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or by introducing significantly more than the judicial exception. The claim recites:
further comprising displaying in a graphical user interface a computerized workflow that combines the partially specified computerized workflow and the one or more additional steps to be added.
- The recitation in the additional limitation is directed to mere data outputting, which is insufficient to transform the judicial exception because the claimed elements are considered insignificant extra-solution activity and well-understood, routine, and conventional.
- The additional limitations, as analyzed above, fail to integrate the judicial exception into a practical application at Step 2A and do not provide an inventive concept in Step 2B.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1 – 4, 8, 13 and 18 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kalluri et al., Pub. No.: US20210264251A1 in view of Bowers et al., Pub. No.: US20160358103A1, Larsen et al., Pub. No.: US20200118058A1 and Rafferty et al., Pub. No.: US11551171B2.
Regarding claim 1, Kalluri teaches: A method, comprising: receiving an indication to predict one or more additional steps to be added to a partially specified computerized workflow based at least in part on the partially specified computerized workflow;
(Kalluri, “[0049] Partial workflow predictor 250 may be any server, processor and/or database configured to generate recommendations of one or more tasks to complete a communication workflow. Cloud-based application 120 may generate an interface that enables a user to create a communication workflow. If the user has created a partial communication workflow [based at least in part on the partially specified computerized workflow], (e.g., the user has not yet set an end node), then the partial workflow predictor 250 may be configured to generate recommendations to the user of which tasks to add to complete the partial communication workflow [receiving an indication to predict one or more additional steps to be added to a partially specified computerized workflow]. Given a new partial communication workflow, the partial workflow predictor 250 may generate a composite feature vector of the new partial communication workflow based on the techniques described above. The partial workflow predictor 250 may then identify a set of partial portions of previously-executed communication workflows that have the same structure as the new partial communication workflow. The partial workflow predictor 250 may determine partial portions of previously-executed communication workflows that are similar to the new partial communication workflow based on a comparison of composite feature vectors in a domain space, as described above. The partial workflow predictor 250 may rank the set of partial portions of previously-executed communication workflows that are similar to the new partial communication workflow in decreasing order based on the known task outcomes of the previously-executed communication workflows. The partial workflow predictor 250 may then recommend the full previously-executed communication workflows of the one or more partial portions of previously-executed communication workflows that are ranked the highest. 
The recommendation may include remaining tasks of the partial portions of the previously-executed communication workflows.”)
Kalluri does not teach:
generating text descriptive of at least a portion of the partially specified computerized workflow;
providing to the machine learning model, machine learning inputs based at least in part on the descriptive text to determine an output text descriptive of the one or more additional steps to be added; and
using one or more processors to automatically implement the one or more additional steps to be added to the partially specified computerized workflow;
providing, to a machine learning model, context information comprising application metadata, wherein the machine learning model is a text-to-text pre-trained model:
wherein the output text is based on the context information and the machine learning inputs;
Bowers teaches:
generating text descriptive of at least a portion of the partially specified computerized workflow;
(Bowers, “[0028] A user of the platform can utilize the experiment management engine to author an experiment (e.g., an entirely new experiment or a modified experiment based on a previously defined experiment). The authorship of the experiment is defined through the workflow authoring tool, where the user defines the workflow as a set of “operators,” each with at least one input dataset and at least one output dataset. The user can define the input of one operator as the output of another operator. When the user finalizes the workflow definition, the workflow execution engine can traverse (e.g., examine individually in sequence) through the input and output linkages of the operators as a directed graph to infer the interdependencies amongst the operators. By parsing the workflow definition (e.g., text formatted according to a workflow definition language) [generating text descriptive of at least a portion of the partially specified computerized workflow], the workflow execution engine can:”)
providing to the machine learning model, machine learning inputs based at least in part on the descriptive text
(Bowers, “[0086] FIG. 8 is a flow chart illustrating a method 800 of operating a machine learning system (e.g., the machine learning system 200 of FIG. 2), in accordance with various embodiments. The machine learning system can be part of an application service system (e.g., the application service system 100 of FIG. 1). At step 802, the machine learning system can initialize a workflow run in a machine learning system [providing to the machine learning model, machine learning] by identifying a text string defining a workflow [inputs based at least in part on the descriptive text]. At step 804, the machine learning system can traverse syntax of the text string to determine an interdependency graph of one or more data processing operator instances of the workflow. The data processing operator instances are associated with one or more data processing operator types. The machine learning system can traverse the syntax in depth-first traversal or breadth-first traversal.”)
to determine an output text descriptive of the one or more additional steps to be added;
(Bowers, “[0028] A user of the platform can utilize the experiment management engine to author an experiment (e.g., an entirely new experiment or a modified experiment based on a previously defined experiment). The authorship of the experiment is defined through the workflow authoring tool, where the user defines the workflow as a set of “operators,” each with at least one input dataset and at least one output dataset. The user can define the input of one operator as the output of another operator. When the user finalizes the workflow definition, the workflow execution engine can traverse (e.g., examine individually in sequence) through the input and output linkages of the operators as a directed graph to infer the interdependencies amongst the operators. By parsing the workflow definition (e.g., text formatted according to a workflow definition language), the workflow execution engine can: determine one or more production or ephemeral code packages (e.g., code packages from different programming languages and/or libraries) [to determine an output text descriptive of the one or more additional steps to be added] required for the workflow based on the operator definitions; identify machines (e.g., physical devices or virtual devices) to run the operators according to resource constraints explicitly or implicitly defined in the workflow; determine one or more available parallelisms based on the inferred interdependencies; expunge redundant schedule of operators by checking against a memoization repository, schedule executions of the operators in the identified machines based on the available parallelisms; cache resulting output of the operators in the memoization repository; and render an experiment report based on the result of the scheduled executions and the rendering parameters defined by the workflow and/or at least one of the associated operators.”)
Bowers and Kalluri are related to the same field of endeavor (i.e., workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Bowers with the teachings of Kalluri to enable prediction of task outcomes, recommendation of additional tasks to complete partial workflows, and generation of vector representations for workflows. (Bowers, Abstract).
Kalluri in view of Bowers do not teach:
using one or more processors to automatically implement the one or more additional steps to be added to the partially specified computerized workflow;
providing, to a machine learning model, context information comprising application metadata, wherein the machine learning model is a text-to-text pre-trained model:
wherein the output text is based on the context information and the machine learning inputs;
Larsen teaches:
using one or more processors to automatically implement the one or more additional steps to be added to the partially specified computerized workflow
(Larsen, “[0035] The workflow tracking platform 102 may automatically generate a workflow or project and auto populate the workflow or project with one or more workflow items or tasks [using one or more processors to automatically implement the one or more additional steps]. Workflow items or tasks include, for example, a task to be completed [to be added to the partially specified computerized workflow], a sensor reading to be determined, a workorder agreement to be generated, a workorder agreement to be signed, a threshold sensor reading to be reached, a threshold time of working to be reached, a final product to be generated or provided, and so forth. In various embodiments the workflow items or tasks may be manually input by a user associated with any of the provider 106, the third party 108, or the consumer 110. Further, the workflow items or tasks may be automatically generated according to predefined rules or task completion requirements set by, for example, the provider 106. Further, the workflow items or tasks may be automatically generated according to one or more inputs received from an account. For example, an account may indicate to the workflow tracking platform 102 that a workflow or projects should be generated for a particular job type. The workflow tracking platform 102 may then automatically generate one or more workflow items or tasks to be completed according to predefined rules or task completion requirements for that particular job type.”)
Larsen, Kalluri and Bowers are related to the same field of endeavor (i.e., workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Larsen with the teachings of Kalluri and Bowers to add automated task verification and real-time tracking to a workflow management system in order to improve task accuracy and enable dynamic updates to task status within the workflow. (Larsen, Abstract).
Kalluri in view of Bowers and Larsen do not teach:
providing, to a machine learning model, context information comprising application metadata, wherein the machine learning model is a text-to-text pre-trained model:
wherein the output text is based on the context information and the machine learning inputs;
Rafferty teaches:
providing, to a machine learning model,
(Rafferty, (col.3 line [19 - 34]), “The workflow generation system may perform a sentiment analysis of the historical communication data to determine historical response data identifying whether the recipient responses of the historical communication data indicate approvals, rejections, or commentaries. The workflow generation system may train a machine learning model, with the historical workflow data and the historical response data [providing, to a machine learning model], to generate a trained machine learning model that determines proposed workflows. The workflow generation system may receive, from a particular client device, communication data identifying a communication created by a particular user of the particular client device, and process the communication response data, with the trained machine learning model, to determine whether a workflow is needed and which particular recipients are to be included in the workflow.”)
context information comprising application metadata
(Rafferty, (col. 4 line [7 – 23]), “As shown in FIG. 1A, and by reference number 105, the workflow generation system may receive, from client devices, historical communication data identifying historical communications, created by users of the client devices, that include historical recipients and historical recipient responses to the communications. The historical communications may include email communications (e.g., associated with an email application), instant messaging communications (e.g., associated with an instant messaging application), planning communications (e.g., associated with a planning application), telecommunications (e.g., associated with a telecommunications application), and/or the like that are sent by the client devices and received by the client devices. As an example, historical communication data associated with an email may include historical communications (e.g., content of the email, metadata associated with the email [context information comprising application metadata], such as a sender or a time stamp, and/or the like), historical recipients (e.g., recipients of the email or related emails), and historical recipient responses to the communications (e.g., responses to the email or related emails by recipients, a time stamp for each response, and/or the like).”)
wherein the machine learning model is a text-to-text pre-trained model:
(Rafferty, (col. 3 line [24 – 34]), “The workflow generation system may train a machine learning model, with the historical workflow data and the historical response data [wherein the machine learning model is a text-to-text pre-trained model], to generate a trained machine learning model that determines proposed workflows. The workflow generation system may receive, from a particular client device, communication data identifying a communication created by a particular user of the particular client device, and process the communication response data, with the trained machine learning model, to determine whether a workflow is needed and which particular recipients are to be included in the workflow.”)
wherein the output text is based on the context information and the machine learning inputs;
(Rafferty, (col. 14 line [45 – 60]), “As an example, the trained machine learning model 225 may predict a value of “approval chain instant message” for the target variable of “recommended workflow” for the new observation, as shown by reference number 235 [is based on the context information and the machine learning inputs]. Based on this prediction, the machine learning system may provide a first recommendation, may provide output for determination of a first recommendation, may perform a first automated action, may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action), and/or the like [wherein the output text]. The first recommendation may include, for example, a recommendation to provide an approval chain instant message for an identified set of recipients in an identified order. The first automated action may include, for example, an action to automatically initiate the approval chain instant message for the identified set of recipients in the identified order.”)
Rafferty, Kalluri, Bowers and Larsen are related to the same field of endeavor (i.e., workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Rafferty with the teachings of Kalluri, Bowers and Larsen to add workflow generation steps that identify recipients, propose workflows, and require approval before execution, thereby enabling automated yet controlled workflow creation. (Rafferty, Abstract).
Regarding claim 19, Kalluri teaches: one or more processors configured to:
(Kalluri, “[0117] Computer system 1000 may comprise a storage subsystem 1018 that comprises software elements, shown as being currently located within a system memory 1010. System memory 1010 may store program instructions that are loadable and executable on processing unit 1004 [one or more processors], as well as data generated during the execution of these programs.”)
a memory coupled to at least one of the one or more processors and configured to provide at least one of the one or more processors with instructions.
(Kalluri, “[0117] Computer system 1000 may comprise a storage subsystem 1018 that comprises software elements, shown as being currently located within a system memory 1010. System memory 1010 [a memory] may store program instructions that are loadable and executable on processing unit 1004 [coupled to at least one of the one or more processors and configured to provide at least one of the one or more processors with instructions], as well as data generated during the execution of these programs.”)
The remaining limitations are analogous to those of claim 1 and are rejected under a similar rationale.
Regarding claim 20, Kalluri teaches: A computer program product embodied in a non-transitory computer readable medium and comprising computer instructions for:
(Kalluri, “[0119] Storage subsystem 1018 may also provide a tangible computer-readable storage medium [A computer program product embodied in a non-transitory computer readable medium] for storing the basic programming and data constructs that provide the functionality of some embodiments. Software (programs, code modules, instructions) [comprising computer instructions] that when executed by a processor provide the functionality described above may be stored in storage subsystem 1018. These software modules or instructions may be executed by processing unit 1004. Storage subsystem 1018 may also provide a repository for storing data used in accordance with the present invention.”)
The remaining limitations are analogous to those of claim 1 and are rejected under a similar rationale.
Regarding claim 2, Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Bowers further teaches: wherein the indication to predict the one or more additional steps to be added is generated by a user via a graphical user interface.
(Bowers, “[0079] FIG. 7A is a block diagram illustrating a workflow run definition 700, in accordance with various embodiments. The workflow run definition 700 defines a workflow run. The workflow run can be associated with an experiment being run on the machine learning system. In some embodiments, the workflow run definition 700 [wherein the indication to predict the one or more additional steps to be added] can be constructed as a user interface [is generated by a user via a graphical user interface] receives one or more inputs from an operating user. In some embodiments, the workflow run definition 700 can be constructed based on a text string imported through the user interface or an API of a workflow authoring tool (e.g., the workflow authoring tool 126 of FIG. 1).”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Bowers with the teachings of Kalluri, Larsen and Rafferty for the same reasons disclosed for claim 1.
Regarding claim 3, Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Kalluri further teaches: wherein the partially specified computerized workflow has at least in part been specified manually by a user via a graphical user interface.
(Kalluri, “[0049] Partial workflow predictor 250 may be any server, processor and/or database configured to generate recommendations of one or more tasks to complete a communication workflow. Cloud-based application 120 may generate an interface that enables a user to create a communication workflow [has at least in part been specified manually by a user via a graphical user interface]. If the user has created a partial communication workflow [wherein the partially specified computerized workflow], (e.g., the user has not yet set an end node), then the partial workflow predictor 250 may be configured to generate recommendations to the user of which tasks to add to complete the partial communication workflow. Given a new partial communication workflow, the partial workflow predictor 250 may generate a composite feature vector of the new partial communication workflow based on the techniques described above. The partial workflow predictor 250 may then identify a set of partial portions of previously-executed communication workflows that have the same structure as the new partial communication workflow. The partial workflow predictor 250 may determine partial portions of previously-executed communication workflows that are similar to the new partial communication workflow based on a comparison of composite feature vectors in a domain space, as described above. The partial workflow predictor 250 may rank the set of partial portions of previously-executed communication workflows that are similar to the new partial communication workflow in decreasing order based on the known task outcomes of the previously-executed communication workflows. The partial workflow predictor 250 may then recommend the full previously-executed communication workflows of the one or more partial portions of previously-executed communication workflows that are ranked the highest. 
The recommendation may include remaining tasks of the partial portions of the previously-executed communication workflows.”)
Regarding claim 4, Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Larsen further teaches: wherein the partially specified computerized workflow has at least in part been generated automatically.
(Larsen, “[0035] The workflow tracking platform 102 may automatically generate a workflow [has at least in part been generated automatically] or project and auto populate the workflow or project with one or more workflow items or tasks. Workflow items or tasks include, for example, a task to be completed [wherein the partially specified computerized workflow], a sensor reading to be determined, a workorder agreement to be generated, a workorder agreement to be signed, a threshold sensor reading to be reached, a threshold time of working to be reached, a final product to be generated or provided, and so forth. In various embodiments the workflow items or tasks may be manually input by a user associated with any of the provider 106, the third party 108, or the consumer 110. Further, the workflow items or tasks may be automatically generated according to predefined rules or task completion requirements set by, for example, the provider 106. Further, the workflow items or tasks may be automatically generated according to one or more inputs received from an account. For example, an account may indicate to the workflow tracking platform 102 that a workflow or projects should be generated for a particular job type. The workflow tracking platform 102 may then automatically generate one or more workflow items or tasks to be completed according to predefined rules or task completion requirements for that particular job type.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Larsen with the teachings of Kalluri, Bowers and Rafferty for the same reasons disclosed for claim 1.
Regarding claim 8, Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Bowers further teaches: wherein the machine learning model has been pre-trained on a dataset and then fine-tuned for a prediction task.
(Bowers, “[0003] A typical machine learning workflow may include building a model from a sample dataset (referred to as a “training set”) [wherein the machine learning model has been pre-trained on a dataset], evaluating the model against one or more additional sample datasets (referred to as a “validation set” and/or a “test set”) [and then fine-tuned for a prediction task] to decide whether to keep the model and to benchmark how good the model is, and using the model in “production” to make predictions or decisions against live input data captured by an application service. The training set, the validation set, and/or the test set can respectively include pairs of input datasets and expected output datasets that correspond to the respective input datasets.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Bowers with the teachings of Kalluri, Larsen and Rafferty for the same reasons disclosed for claim 1.
Regarding claim 13, Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Rafferty further teaches: wherein the text-to-text pre-trained model outputs a confidence score associated with the output text.
(Rafferty, (col. 9 line [15 – 28]), “As shown in FIG. 1G, and by reference number 135, the workflow generation system may generate a proposed workflow and a confidence score for the proposed workflow when the workflow is determined to be needed and based on the particular recipients [wherein the text-to-text pre-trained model outputs a confidence score associated with the output text]. The proposed workflow may define an approval chain for the communication created by the particular user. Although the communication is referred to herein as a singular communication when describing individual transmissions of the communication to different particular recipients, in practice a transmission of the communication to one recipient may vary in form and format, and/or may include additional information or less information compared to a transmission of the communication to another recipient.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Rafferty with the teachings of Kalluri, Bowers and Larsen for the same reasons disclosed for claim 1.
Regarding claim 18, Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Kalluri further teaches: further comprising displaying in a graphical user interface a computerized workflow that combines the partially specified computerized workflow and the one or more additional steps to be added.
(Kalluri, “[0013] …generating a recommendation for completing the partial workflow, the recommendation including one or more recommended tasks [and the one or more additional steps to be added] that complete the partial workflow, the one or more recommended tasks being selected from one or more remaining tasks of a previously-executed partial workflow of the one or more previously-executed partial workflows [that combines the partially specified computerized workflow] that share the same structure with the partial workflow and that are determined to be similar to the partial workflow, and the selection being based on the previous performance values of the one or more previously-executed partial workflows that share the same structure with the partial workflow and that are determined to be similar to the partial workflow; and displaying the recommendation on the interface [further comprising displaying in a graphical user interface a computerized workflow]. Other embodiments of this aspect include corresponding computer systems, apparatus, and executable code or instructions (e.g., a computer-program product) stored on non-transitory computer-readable storage medium, each configured to perform the actions of the methods.”)
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kalluri in view of Bowers, Larsen, Rafferty, and further in view of Varela-Vaca et al., "Process mining to unleash variability management: discovering configuration workflows using logs."
Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Kalluri in view of Bowers, Larsen and Rafferty do not teach:
wherein the one or more additional steps to be added belong to an enumerated collection of available steps that the machine learning model is permitted to output.
Varela-Vaca teaches:
wherein the one or more additional steps to be added belong to an enumerated collection of available steps that the machine learning model is permitted to output
(Varela-Vaca, pages 3–4, “Process mining is an important topic that has been well received by the enterprises, bringing about the evolution of the research solution tools (e.g., ProM [64]) to commercial solutions (e.g., Disco™ and Celonis™). This facilitates its applicability to several contexts and areas, although variability has been out of the scope of these techniques before this paper. Process discovery in process mining uses a set of traces similar to the configuration log shown in Figure 3, to obtain a model that covers the possible traces. Figure 4 shows the process discovered by Disco tool-suite, which covers every possibility configuration trace [wherein the one or more additional steps to be added belong to an enumerated collection of available steps]. The relational patterns among the definition of the features become part of the model. For example, two features can be the first in the traces (CRM or Project management) or after CRM always Task List is selected [that the machine learning model is permitted to output]. Figure 4 also shows the number of traces that are represented by each transition, giving information about the importance of each part of the traces in the obtained model.”)
Varela-Vaca, Kalluri, Bowers, Larsen and Rafferty are related to the same field of endeavor (i.e., workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Varela-Vaca with the teachings of Kalluri, Bowers, Larsen and Rafferty to improve efficiency, personalize the user experience, and reduce configuration errors. (Varela-Vaca, Abstract).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kalluri in view of Bowers, Larsen, Rafferty, and further in view of Kubota et al., Pub. No. US20050235077A1.
Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Kalluri in view of Bowers, Larsen and Rafferty do not teach:
wherein generating the descriptive text includes converting data in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format to a text format.
Kubota teaches:
wherein generating the descriptive text includes converting data in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format to a text format.
(Kubota, “[0089] In the step S510, the conversion server 12 converts the received file list data in XML format [wherein generating the descriptive text includes converting data in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format] into data in text format [to a text format] from which tag information is eliminated to send the converted data to the PC1.”)
Kubota, Kalluri, Bowers, Larsen and Rafferty are related to the same field of endeavor (i.e., workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Kubota with the teachings of Kalluri, Bowers, Larsen and Rafferty to improve cross-platform compatibility, reduce user burden, and enable seamless task execution and coordination in a distributed environment. (Kubota, Abstract).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kalluri in view of Bowers, Larsen, Rafferty, and further in view of Bui et al., Pub. No. US20220114476A1.
Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Kalluri in view of Bowers, Larsen and Rafferty do not teach:
wherein the text-to-text pre-trained model is a large language model (LLM)
that has an Encoder-Decoder architecture.
Bui teaches:
wherein the text-to-text pre-trained model is a large language model (LLM)
(Bui, “[0027] As noted above, the text sequence labeling system trains and utilizes a text sequence labeling model (e.g., a machine-learning model) [wherein the text-to-text pre-trained model is a large language model (LLM)]. In one or more implementations, the text sequence labeling model includes a bidirectional long-short-term memory (BiLSTM) layer. In some implementations, the text sequence labeling model includes a transformer-based encoder, a dense layer, and/or a conditional random field (CRF) layer. Additional detail regarding example architectures of the teacher model is provided below.”)
pre-trained model.
(Bui, “[0095] In one or more implementations, the text sequence labeling system 106 utilizes the training data 500 to train the teacher model 108. For example, the text sequence labeling system 106 creates the teacher model 108 by initializing the first set of model parameters 610 with a set of default or random model parameters. In some implementations, the text sequence labeling system 106 utilizes a pre-trained set of parameters to initialize the teacher model 108 [pre-trained model].”)
that has an Encoder-Decoder architecture.
(Bui, “[0096] Next, in various implementations, the text sequence labeling system 106 begins processing the text data 504 from the labeled data set 502 at the teacher model 108. For instance, the teacher model 108 encodes the text data 504 (e.g., utilizing a transformer-based encoder) [that has an Encoder], processes the encoded data through various layers of the teacher model 108 (e.g., utilizing the Bi-LSTM model and dense layer), then decodes the processed data (e.g., utilizing the CRF) [Decoder architecture]. Indeed, in example implementations, the teacher model 108 generates predicted text sequence labels 612 corresponding to the inputted text data 504.”)
Bui, Kalluri, Bowers, Larsen and Rafferty are related to the same field of endeavor (i.e., workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Bui with the teachings of Kalluri, Bowers, Larsen and Rafferty to result in a more accurate, efficient and robust model for text sequence labeling. (Bui, Abstract).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kalluri in view of Bowers, Larsen, Rafferty, and further in view of Martin et al., Pub. No. US20210209509A1.
Regarding claim 9, Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Kalluri in view of Bowers, Larsen and Rafferty do not teach:
wherein the machine learning model has been trained based at least in part on a plurality of training instances of synthetically generated training data.
Martin teaches:
wherein the machine learning model has been trained based at least in part on a plurality of training instances of synthetically generated training data
(Martin, “[0013] One embodiment comprises a computer-implemented method for guided synthesis of training data, including receiving a set of input data from a data store, the input data comprising training data for a machine learning process. The method transforms the set of input data, by a generator process, to generate a set of output data. An assessor process provides an assessment of the set of output data against a set of characteristics to determine whether the set of characteristics are met by the set of output data. In some embodiments, the assessor processor is a human process that supports the interaction with a human specialist. The output data can be augmented by the generator process, based on the assessment provided by the assessor process to generate a set of synthetic training data [wherein the machine learning model has been trained based at least in part on a plurality of training instances of synthetically generated training data].”)
Martin, Kalluri, Bowers, Larsen and Rafferty are related to the same field of endeavor (i.e.: workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Martin with teachings of Kalluri, Bowers, Larsen and Rafferty to add scalable, high-quality synthetic data generation to the system, enabling the creation of diverse and application-specific training datasets (Martin, Abstract).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kalluri in view of Bowers, Larsen, Rafferty, Martin and in further view of Weng et al., Patent No. US8019594B2.
Regarding claim 10, Kalluri in view of Bowers, Larsen, Rafferty and Martin teach the method of claim 9.
Kalluri in view of Bowers, Larsen, Rafferty and Martin do not teach:
wherein at least one training instance of the plurality of training instances comprises a flow representation that is divided into an initial steps portion and an additional steps portion at a randomly selected split point.
Weng teaches:
wherein at least one training instance of the plurality of training instances comprises a flow representation that is divided into an initial steps portion
(Weng, col. 7, lines 17–27, “As illustrated in FIG. 3, the original feature space is initially split into a number of different feature sets [wherein at least one training instance of the plurality of training instances], these feature sets are then each processed by a feature selection algorithm to produce a number of feature subsets. The feature subsets are then processed by a merge-split function to generate a subsequent number of feature sets. If the initial feature space [comprises a flow representation that is divided into an initial steps portion] is thought of as a merged set, it can be seen from FIG. 3 that the entire PFS method comprises a number of merge-split-select operations performed on a successively smaller number of features.”)
and an additional steps portion at a randomly selected split point.
(Weng, col. 7, line 60 – col. 8, line 14, “In one embodiment, two different types of splitting methods can be used to generate a subsequent set of feature sets after a merge operation. These two methods are a random split strategy, and a dimension-based split strategy. In the random split strategy, a feature space is randomly split [and an additional steps portion at a randomly selected split point] into a number of disjoint subspaces. An equal number of features is selected for each new feature set. In the dimension-based split strategy, a feature space is split into disjoint subspaces based on feature dimension/variables. The number of features for each new feature set is determined on the basis of certain distributions. The dimensions can be any appropriate characteristic common to a significant number of features within the feature space. For example, in a natural language processing application, the dimension for spoken input can be word-based, POS Tag-based, prosody-based, and so on. In the case of dimension-based split, the number of features selected for each dimension can be determined in one of two ways: Uniform and Prior. When the split is Uniform, the same number of features is selected for each dimension. When the split is Prior, the number of features to be selected in each dimension is determined in proportion to the importance of each dimension.”)
Weng, Kalluri, Bowers, Larsen, Rafferty and Martin are related to the same field of endeavor (i.e.: workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Weng with teachings of Kalluri, Bowers, Larsen, Rafferty and Martin to enable better performance of machine learning models by systematically identifying the most informative inputs without the need to reprocess the entire feature space at each step. (Weng, Abstract).
Claim(s) 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kalluri in view of Bowers, Larsen, Rafferty, Martin and in further view of Chickering et al., Pub. No. US20190130308A1.
Regarding claim 11, Kalluri in view of Bowers, Larsen, Rafferty and Martin teach the method of claim 9.
Kalluri in view of Bowers, Larsen, Rafferty and Martin do not teach:
wherein the plurality of training instances is comprised of flow representations of different lengths.
Chickering teaches:
wherein the plurality of training instances is comprised of flow representations of different lengths.
(Chickering, “[0007] In the case of frequent-length encoding for a set of sequences of a specified leaf-node label, the sub-concept state machine may represent all sequences of the set by a single non-cyclic directed chain of states connected by token-consuming transitions (corresponding to labeling the respective token with the leaf-node label, and equal in number to the maximum tracked length or a smaller maximum length selected based on the statistical distribution), with epsilon transitions connecting various states in the chain directly to an end state of the sub-concept state machine. In frequent-sequence encoding, the sub-concept state machine for the set of child-label sequences of a given non-leaf-node label may represent the most frequent child-label sequence(s) by separate respective non-cyclic directed chains of composite states, and provide a parallel alternative (or “default”) (sub-)path through the sub-concept state machine for all other possible child-label sequences. Whatever the structure of the sub-concept state machine, the statistical distribution of the respective set of label sequences may be reflected in different weight functions (e.g., differing in the values of the adjustable parameters) assigned to the transitions along different sub-paths corresponding to the various label sequences. For instance, in frequent-length encoding, the epsilon transitions from states in the chain to the end state may be grouped based on the lengths [wherein the plurality of training instances is comprised of flow representations of different lengths] of the resulting sub-paths through the state machine to distinguish between a group of high-frequency label sequences and a group of lower-frequency label sequences [flow representations], with different weight functions being assigned to the different respective groups. 
In frequent-sequence encoding, transitions to or from the chain(s) representing the frequent child-label sequence(s) may be weighted differently than transitions to the default path.”)
Chickering, Kalluri, Bowers, Larsen, Rafferty and Martin are related to the same field of endeavor (i.e.: workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Chickering with teachings of Kalluri, Bowers, Larsen, Rafferty and Martin to enable a model to accurately extract complex, multi-level labels from data by leveraging observed patterns in training data, (Chickering, Abstract).
Regarding claim 12, Kalluri in view of Bowers, Larsen, Rafferty, Martin and Chickering teach the method of claim 11.
Chickering further teaches: wherein at least one flow representation of the flow representations of different lengths is comprised of flow steps selected according to a statistical distribution of flow steps.
(Chickering, “[0007] In the case of frequent-length encoding for a set of sequences of a specified leaf-node label, the sub-concept state machine may represent all sequences of the set by a single non-cyclic directed chain of states connected by token-consuming transitions (corresponding to labeling the respective token with the leaf-node label, and equal in number to the maximum tracked length or a smaller maximum length selected based on the statistical distribution), with epsilon transitions connecting various states in the chain directly to an end state of the sub-concept state machine. In frequent-sequence encoding, the sub-concept state machine for the set of child-label sequences of a given non-leaf-node label may represent the most frequent child-label sequence(s) by separate respective non-cyclic directed chains of composite states, and provide a parallel alternative (or “default”) (sub-)path through the sub-concept state machine for all other possible child-label sequences. Whatever the structure of the sub-concept state machine, the statistical distribution [is comprised of flow steps selected according to a statistical distribution of flow steps] of the respective set of label sequences [flow representations] may be reflected in different weight functions (e.g., differing in the values of the adjustable parameters) assigned to the transitions along different sub-paths corresponding to the various label sequences. For instance, in frequent-length encoding, the epsilon transitions from states in the chain to the end state may be grouped based on the lengths [wherein at least one flow representation of the flow representations of different lengths] of the resulting sub-paths through the state machine to distinguish between a group of high-frequency label sequences and a group of lower-frequency label sequences, with different weight functions being assigned to the different respective groups. 
In frequent-sequence encoding, transitions to or from the chain(s) representing the frequent child-label sequence(s) may be weighted differently than transitions to the default path.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Chickering with teachings of Kalluri, Bowers, Larsen, Rafferty and Martin for the same reasons disclosed for claim 11.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Kalluri in view of Bowers, Larsen, Rafferty and in further view of Khaitan et al., Pub. No. US20220092408A1, (hereafter Khaitan).
Regarding claim 14, Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Kalluri in view of Bowers, Larsen and Rafferty do not teach:
further comprising providing to the machine learning model, a selection of a tensor data object from a list of tensor data objects, wherein each tensor data object of the list of tensor data objects is associated with different model weights for the machine learning model.
Khaitan teaches:
further comprising providing to the machine learning model, a selection of a tensor data object from a list of tensor data objects,
(Khaitan, “[0011] In one or more of the disclosed embodiments, the distribution instruction comprises at least one of: a broadcast distribution, wherein the plurality of tensor processor clusters comprises the one or more tensor processor clusters selected to receive the set of weights [a selection of a tensor data object from a list of tensor data objects] for processing the input feature; a multicast distribution, wherein a subset of the plurality of tensor processor clusters comprises the one or more tensor processor clusters selected to receive the set of weights for processing the input feature; and a unicast distribution, wherein a singular tensor processor cluster of the plurality of tensor processor clusters comprises the one or more tensor processor clusters selected to receive the set of weights for processing the input feature.”)
wherein each tensor data object of the list of tensor data objects is associated with different model weights for the machine learning model.
(Khaitan, “[0067] FIG. 8 illustrates selected elements of an example method for distributing neural network weights using a tree DMA bus. The method may begin at step 810, where a tree direct-memory access (DMA) controller of a machine learning (ML) accelerator receives a memory address indicating a location in memory storing a set of weights associated with a machine learning model. The tree DMA controller may receive the memory address from a compiler. At step 820, the tree DMA controller may receive a distribution instruction indicating one or more tensor processor clusters of the ML accelerator selected to receive the set of weights for processing. At step 830, the tree DMA controller may retrieve the set of weights from the location in memory indicated by the memory address and at step 840, the tree DMA controller may send the set of weights in a DMA packet addressed to the one or more tensor processor clusters [wherein each tensor data object of the list of tensor data objects] according to the distribution instruction. The DMA packet may be sent to the one or more tensor processor clusters via the tree DMA bus of the ML accelerator and the tensor processor clusters may process different partitioned portions of an input feature in parallel using the set of weights [is associated with different model weights for the machine learning model].”)
Khaitan, Kalluri, Bowers, Larsen and Rafferty are related to the same field of endeavor (i.e.: workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Khaitan with teachings of Kalluri, Bowers, Larsen and Rafferty to deliver model weights quickly and in a structured manner, reducing memory bottlenecks and improving inference or training throughput for large-scale machine learning workloads, (Khaitan, Abstract).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Kalluri in view of Bowers, Larsen, Rafferty and in further view of Zamanirad et al., "Programming bots by synthesizing natural language expressions into API invocations."
Regarding claim 15, Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Kalluri in view of Bowers, Larsen and Rafferty do not teach:
wherein using the one or more processors to automatically implement the one or more additional steps to be added includes causing the one or more processors to convert a text format prediction to one or more application programming interface messages
Zamanirad teaches:
wherein using the one or more processors to automatically implement the one or more additional steps to be added includes causing the one or more processors to convert a text format prediction to one or more application programming interface messages.
(Zamanirad, pages 1–2, “Our vision consists of making APIs first class citizens of bot builders. We aim at synthesizing natural language expression, and at dynamically determining which API to invoke based on our understanding of the users’ intent and on the knowledge over an API knowledge graph that describes what the methods do and how they can be invoked. The vision we set forth in this paper is that of users being able to talk with assistants (as some of us do every day with Siri, Alexa or Cortana) and, with the help of a knowledge of APIs modeled via a knowledge graph and built incrementally, dynamically identify intents, APIs fulfilling the identified intent, and collect from the user the value of the required parameters for invocation. If successful, this can enable a new approach to the development of cognitive services where the “program” is built on the fly based on users’ requests and available services exposed through APIs. More specifically, we devise a technique for synthesizing natural language user expressions [causing the one or more processors to convert a text format prediction] into concrete API calls by leveraging an API Knowledge Graph (KG) to achieve this. The API KG contains information about APIs, their declarations, expressions, parameters and possible values [to one or more application programming interface messages].”)
Zamanirad, Kalluri, Bowers, Larsen and Rafferty are related to the same field of endeavor (i.e.: workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Zamanirad with teachings of Kalluri, Bowers, Larsen and Rafferty to add a wide variety of user expressions and automatically invoke relevant APIs in order to enable more dynamic and context-aware task execution within a workflow management system, (Zamanirad, Abstract).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Kalluri in view of Bowers, Larsen, Rafferty and Zamanirad and in further view of Tremblay et al., Pub. No.: US20220343250A1.
Regarding claim 16, Kalluri in view of Bowers, Larsen, Rafferty and Zamanirad teach the method of claim 15.
Kalluri in view of Bowers, Larsen, Rafferty and Zamanirad do not teach:
wherein using the one or more processors to automatically implement the one or more additional steps to be added further includes transmitting the one or more application programming interface messages to an application configured to generate computerized workflow steps.
Tremblay teaches:
wherein using the one or more processors to automatically implement the one or more additional steps to be added further includes transmitting the one or more application programming interface messages to an application configured to generate computerized workflow steps.
(Tremblay, “[0015] In example embodiments, the method may further include a workflow system for controlling, configuring, and executing the previously created workflow in a platform [implement the one or more additional steps to be added]. The workflow system may utilize an automation [wherein using the one or more processors to automatically] application programming interface (API) [further includes transmitting the one or more application programming interface messages to an application configured] to provide at least one of a “Get” functionality, a “Post” functionality, a “Put” functionality, or a “Delete” functionality with respect to the previously created workflow [to generate computerized workflow steps].”)
Tremblay, Kalluri, Bowers, Larsen, Rafferty and Zamanirad are related to the same field of endeavor (i.e.: workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Tremblay with teachings of Kalluri, Bowers, Larsen, Rafferty and Zamanirad to add extensibility and event-driven customization to the system by enabling users to create and integrate custom code actions into workflows, (Tremblay, Abstract).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Kalluri in view of Bowers, Larsen, Rafferty and in further view of Tremblay.
Regarding claim 17, Kalluri in view of Bowers, Larsen and Rafferty teach the method of claim 1.
Kalluri in view of Bowers, Larsen and Rafferty do not teach:
wherein the partially specified computerized workflow includes a trigger condition and at least one action step that is configured to execute in response to a determination that the trigger condition has occurred.
Tremblay teaches:
wherein the partially specified computerized workflow includes a trigger condition and
(Tremblay, “[0393] The multi-service business platform 510 may include an event system (e.g., event system 522). The event system 522 may be configured to monitor for and record the occurrence of events. In some example embodiments, the event system 522 may be configured to maintain unified events that are tracked across several systems of the multi-service business platform 510. In some of these example embodiments, event records may track all the different types of events that may occur with respect to a particular type of object such that the event record provides a log of all instances of different types of events that occurred with respect to the object. The event system 522 may fit with several of the services in this disclosure including reporting aspects and triggering of workflows [wherein the partially specified computerized workflow includes a trigger condition] and actions as related to default and custom objects.”)
at least one action step that is configured to execute in response to a determination that the trigger condition has occurred.
(Tremblay, “[0282] In embodiments, the workflow manager 1906 performs tasks relating to the execution of workflows. As discussed, a workflow defines a set of actions to be undertaken when performing a service-related task [at least one action step that is configured to execute] in response to one or more conditions being met. In some scenarios, a workflow may be defined with respect to a pipeline stage. In these scenarios, a workflow may be triggered with respect to a ticket only when the ticket is in the respective stage. Furthermore, a workflow includes a set of conditions that trigger a workflow [in response to a determination that the trigger condition has occurred] (whether the workflow is defined with respect to a ticket pipeline or independent of a ticket pipeline). In embodiments, the determination as to whether a workflow is triggered is based on the attributes of a ticket. As discussed, the ticket management system 1604 may deploy workflow listening threads that listen for tickets that meet the conditions of a particular workflow. Upon determining that a ticket meets the conditions of a workflow (or put another way, a ticket triggers a workflow), the workflow listening thread adds the ticket to a workflow queue corresponding to the workflow listening thread.”)
Tremblay, Kalluri, Bowers, Larsen and Rafferty are related to the same field of endeavor (i.e.: workflow management). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Tremblay with teachings of Kalluri, Bowers, Larsen and Rafferty to add extensibility and event-driven customization to the system by enabling users to create and integrate custom code actions into workflows, (Tremblay, Abstract).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Vandikas et al., Pub. No.: US20160188298A1, (2013).
Vandikas teaches predicting elements for workflow development. A current configuration of the new workflow is received, and workflow element choices for a next element to be added to the new workflow are determined along with a respective probability of relevance associated with each of the workflow element choices.
Shazeer et al., Patent No.: US11574131B2, (2022).
Shazeer teaches processing contextual text string with the machine-learned language model to generate one or more intermediate text strings that include one or more intermediate text tokens. The computing system can process the one or more intermediate text strings with the machine-learned language model to generate an output text string comprising one or more output text tokens.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATIYAS T MARU whose telephone number is (571)270-0902 or via email: matiyas.maru@uspto.gov. The examiner can normally be reached Monday - Friday (8:00am - 4:00pm) EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at (571)431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.T.M./ Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148