DETAILED ACTION
This non-final Office action is in response to Applicant’s submission filed September 24, 2024. Claims 1-20 are currently pending; Claims 1, 10 and 18 are the independent claims.
The instant application is a continuation of Application No. 17528150, now U.S. Patent No. 12099820. Application No. 17528150 is a continuation-in-part of Application No. 16702966, now U.S. Patent No. 11200539.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-13 and 15-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4, 6-8, 10-12, 14, 16 and 27 of U.S. Patent No. 11200539 (Application No. 16774077) since the claims, if allowed, would improperly extend the “right to exclude” already granted in the patent.
The subject matter claimed in the instant application is fully disclosed in the patent and is covered by the patent since the patent and the application are claiming common subject matter, as follows:
The claims are rejected over claims 1-4, 6-8, 10-12, 14, 16 and 27 of U.S. Patent No. 11200539, wherein it would have been obvious to one skilled in the art to omit, from the patented independent claims, one or more method step(s) – see table below. Applicant appears to be attempting to broaden the scope of the parent application/patent and capture scope which was forgone during prosecution of the parent application.
The table below maps the conflicting claims between the instant application and U.S. Patent No. 11200539.
Instant Application          USPN 11200539
1, 10, 18                    1, 16, 27
  (method steps omitted from the patented claims:
   - monitoring activities when respective developers are creating RPA workflows
   - cause the captured activities/workflows to be stored in a database over the communication network
   - call the one or more ML models over the communication network
   - train the one or more ML models using the captured sequence of activities/RPA workflows
   - analyze the current RPA workflow as a current developer adds/modifies activities
   - detect that one or more added/modified activities within GUI are indicative of a next sequence of activities)
2                            7
3                            3
4                            1, 16, 27
5                            6
6                            10
7                            11
8                            12
9                            14
11                           1, 16, 27
12                           2
13                           3
15                           4
16                           10
17                           12
19                           8
20                           1, 16, 27
Furthermore, there is no apparent reason why applicant was prevented from presenting claims corresponding to those of the instant application during prosecution of the application which matured into a patent. See In re Schneller, 397 F.2d 350, 158 USPQ 210 (CCPA 1968). See also MPEP § 804.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5-7, 12-22, 24, 26, 32 and 33 of U.S. Patent No. 12099820 (Application No. 17528150) since the claims, if allowed, would improperly extend the “right to exclude” already granted in the patent.
The subject matter claimed in the instant application is fully disclosed in the patent and is covered by the patent since the patent and the application are claiming common subject matter, as follows:
The claims are rejected over claims 1, 5-7, 12-22, 24, 26, 32 and 33 of U.S. Patent No. 12099820, wherein it would have been obvious to one skilled in the art to omit, from the patented independent claims, one or more method step(s) – see table below. Applicant appears to be attempting to broaden the scope of the parent application/patent and capture scope which was forgone during prosecution of the parent application.
The table below maps the conflicting claims between the instant application and U.S. Patent No. 12099820.
Instant Application          USPN 12099820
1, 10, 18                    1, 17, 27
  (method steps omitted from the patented claims:
   - monitoring the sequence of activities
   - retrain the one or more AI/ML models based on the suggested next sequence of activities)
2                            5
3                            6
4                            7
5                            12
6                            13
7                            14
8                            15
9                            16
11                           18
12                           19
13                           20
14                           21
15                           22
16                           24
17                           26
19                           32
20                           33
Furthermore, there is no apparent reason why applicant was prevented from presenting claims corresponding to those of the instant application during prosecution of the application which matured into a patent. See In re Schneller, 397 F.2d 350, 158 USPQ 210 (CCPA 1968). See also MPEP § 804.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Regarding independent Claims 1, 10 and 18, the claims are directed to the abstract idea of workflow creation. The claims recite a process, i.e., a series of steps (Statutory Category – Yes – process).
The claims recite a judicial exception: a method of organizing human activity, namely workflow creation (Judicial Exception – Yes – organizing human activity). Specifically, the claims are directed to displaying one or more suggested next sequences of activities via an electronic display as part of a robotic process automation tool/application, wherein workflow creation is a fundamental economic practice that falls into the abstract idea subcategories of sales activities and/or commercial interactions. See MPEP § 2106.04(a). Further, all of the steps of “receive”, “provide”, “execute”, “run”, “receive”, “transmitting” and “display”, which recite functions of the workflow creation, are also directed to an abstract idea that falls into the abstract idea subcategories of sales activities and/or commercial interactions. The intended purpose of independent Claims 1, 10 and 18 appears to be to display to a human (software developer) one or more suggested next sequences of activities in a workflow as part of a robotic process automation designer application (RPA, workflow, integrated development environment, tool; Figure 10).
Accordingly, the claims recite an abstract idea – a fundamental economic practice, specifically in the abstract idea subcategories of sales activities and/or commercial interactions. The additional elements beyond the exception are the generic computer elements: memory, processor, computing system, designer application (software per se), communication network, computer readable medium storing computer programs, and electronic display. See MPEP § 2106.04(a).
Because the claims recite an abstract idea under Step 2A, Prong One, we proceed to Step 2A, Prong Two. Considering whether the additional elements set forth in the claim integrate the abstract idea into a practical application (see MPEP § 2106.04(a)), the previously identified non-abstract elements directed to generic computing components include: memory, processor, computing system, designer application (software per se), communication network, computer readable medium storing computer programs, and electronic display. These generic computing components are merely used to receive/access, process or display data as described extensively in Applicant’s specification (Specification: Figure 5). Generic computers performing generic computer functions, alone, do not amount to significantly more than the abstract idea. Moreover, when viewed as a whole with such additional elements considered as an ordered combination, the claim modified by adding a generic computer would be nothing more than a purely conventional computerized implementation of Applicant’s workflow creation in the general field of business process automation and would not provide significantly more than the judicial exception itself. Note that McRO, Inc. v. Bandai Namco Games America Inc., 837 F.3d 1299 (Fed. Cir. 2016), guides: "[t]he abstract idea exception prevents patenting a result where 'it matters not by what process or machinery the result is accomplished.'" 837 F.3d at 1312 (quoting O'Reilly v. Morse, 56 U.S. 62, 113 (1854)) (emphasis added). The claims are not directed to a particular machine, nor do they recite a particular transformation (MPEP § 2106.05(b)).
Additionally, the claims do not recite any specific claim limitations that would provide a meaningful limitation beyond generally linking the use of the judicial exception to a particular technological environment. Nor do the claims present any other issues as set forth in MPEP § 2106.04(a) regarding a determination of whether the additional generic elements integrate the judicial exception into a practical application. See Revised Guidance, 84 Fed. Reg. at 55. Rather, the claims merely use instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea. Thus, under Step 2A, Prong Two (MPEP §§ 2106.05(a)-(c) and (e)-(h)), claims 1-20 do not integrate the judicial exception into a practical application.
Regarding the use of the generic (known, conventional) recited memory, processor, computing system, designer application (software per se), communication network, computer readable medium storing computer programs, and electronic display, the Supreme Court has held that "the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention." Alice, 573 U.S. 208, 223. Generic computers performing generic computer functions, alone, do not amount to significantly more than the abstract idea. The claims as a whole do not recite more than what was well-known, routine and conventional in the field (see MPEP § 2106.05(d)). In light of the foregoing and under MPEP § 2106.04(a), each of the claims, considered as a whole, is directed to a patent-ineligible abstract idea that is not integrated into a practical application and does not include an inventive concept.
Regarding the recited one or more artificial intelligence/machine learning models, the examiner notes that the one or more ML/AI models are trained external to/outside of the scope of the invention as claimed. Further, the artificial intelligence/machine learning models are recited at a high level of generality and amount to no more than mere instructions to apply the abstract idea using one or more generic artificial intelligence/machine learning models on a generic computer, also recited at a high level of generality. The one or more artificial intelligence/machine learning models are used to generally apply the abstract idea without limiting how the models function. The artificial intelligence/machine learning models are described at such a high level that their use amounts to using a generic computer with one or more generic AI/ML models to apply the abstract idea. These limitations only recite outcomes/results of the steps without any details about how the outcomes are accomplished.
Accordingly, the claims are not patent eligible under 35 U.S.C. 101.
Additionally, the claims recite a judicial exception, a mental process, which can be performed in the human mind or with pen and paper (Judicial Exception – Yes – mental process).
The claimed steps of executing the one or more AI/ML models all describe the abstract idea. As drafted, these limitations are directed to a process that, under its reasonable interpretation, covers performance of the steps in the mind but for the recitation of generic computer components. Other than the recitation of a memory, processor, computing system, designer application (software per se), communication network, computer readable medium storing computer programs, and electronic display, nothing in the claimed steps precludes the steps from practically being performed in the mind. The claims do not recite additional elements that are sufficient to amount to significantly more than the abstract idea because the steps of receiving a captured sequence of activities and receiving one or more suggested next sequences of activities are directed to insignificant pre-solution activity (i.e., data gathering). The steps of providing the captured sequence of activities to one or more AI/ML models as input and displaying the one or more suggested next sequences of activities are directed to insignificant post-solution activity (i.e., data output). The mere nominal recitation of a generic processor/computer does not take the claim limitations out of the mental processes grouping. Thus, the claims recite a mental process. (Judicial Exception recited – Yes – mental process).
The claims do not integrate the abstract idea into a practical application. The generic memory, processor, computing system, designer application (software per se), communication network, computer readable medium storing computer programs, and electronic display are each recited at a high level of generality and merely perform generic computer functions of receiving, providing, processing, transmitting or displaying data. The generic processor/computer merely applies the abstract idea using generic computer components. The elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not recite improvements to the functioning of a computer or any other technology field (MPEP § 2106.05(a)); the claims do not apply or use the abstract idea to effect a particular treatment or prophylaxis for a disease or medical condition; the claims do not apply the abstract idea with a particular machine (MPEP § 2106.05(b)); the claims do not effect a transformation or reduction of a particular article to a different state or thing (e.g., data remains data even after processing; MPEP § 2106.05(c)); and the claims do not apply or use the abstract idea in some other meaningful way beyond generally linking the use of the abstract idea to a particular technological environment (i.e., a generic computer) such that the claim as a whole is more than a drafting effort designed to monopolize the abstract idea (MPEP § 2106.05(e)). The recited generic computing elements are no more than mere instructions to apply the exception using a generic computer component.
Regarding the recited one or more artificial intelligence/machine learning models, the examiner notes that the one or more ML/AI models are trained external to/outside of the scope of the invention as claimed. Further, the artificial intelligence/machine learning models are recited at a high level of generality and amount to no more than mere instructions to apply the abstract idea using one or more generic artificial intelligence/machine learning models on a generic computer, also recited at a high level of generality. The one or more artificial intelligence/machine learning models are used to generally apply the abstract idea without limiting how the models function. The artificial intelligence/machine learning models are described at such a high level that their use amounts to using a generic computer with one or more generic AI/ML models to apply the abstract idea. These limitations only recite outcomes/results of the steps without any details about how the outcomes are accomplished. The recitation of one or more artificial intelligence/machine learning models in this claim does not negate the mental nature of these limitations because the trained model is merely used as a tool to perform an otherwise mental process.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (Integrated into a Practical Application – No).
As discussed above, the additional elements in the claims amount to no more than a mere instruction to apply the abstract idea using generic computing components, wherein mere instructions to apply a judicial exception using generic computer components cannot integrate a judicial exception into a practical application or provide an inventive concept. The receiving and displaying steps that were considered extra-solution activity have been re-evaluated and determined to be well-understood, routine, conventional activity in the field. Applicant’s specification does not provide any indication that the computer/processor is anything other than a generic, off-the-shelf computer component, and the Symantec, TLI, and OIP Techs. court decisions (MPEP § 2106.05(d)(II)) indicate that mere collection or receipt of data is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). For these reasons, there is no inventive concept. The claims are ineligible (Provides Inventive Concept – No).
The claims are ineligible under 35 U.S.C. 101 as being directed to an abstract idea without significantly more.
Regarding dependent claims 2-9, 11-17, 19 and 20, the claims are directed to the abstract idea of workflow creation and merely further limit the abstract idea claimed in independent Claims 1, 10 and 18.
Claim 2 further limits the abstract idea by retraining the one or more trained AI/ML models after a period of time has passed OR a predetermined amount of data has been collected OR after a predetermined number of users OR after a predetermined percentage of users OR any combination thereof (a more detailed abstract idea remains an abstract idea). Claims 3 and 19 further limit the abstract idea wherein the trained AI/ML model learns user-specific style OR logic OR conventions OR any combination thereof as a user develops RPA workflows over time (a more detailed abstract idea remains an abstract idea). Claims 4, 11 and 20 further limit the abstract idea by detecting that one or more added/modified activities are indicative of a next sequence of activities and producing a suggested next sequence of activities and a suggestion confidence threshold as output (a more detailed abstract idea remains an abstract idea). Claim 5 further limits the abstract idea by limiting the suggestion confidence threshold to probabilistic thresholds based on learned confidence scores (a more detailed abstract idea remains an abstract idea). Claim 6 further limits the abstract idea by limiting the one or more AI/ML models to global and local models (a more detailed abstract idea remains an abstract idea). Claims 7 and 16 further limit the abstract idea by limiting the AI/ML global/local models to having different suggestion confidence thresholds (a more detailed abstract idea remains an abstract idea). Claims 8 and 17 further limit the abstract idea in that, when a trained AI/ML model does not provide a suggestion, the application is configured to call second/third AI/ML models to provide a suggestion (a more detailed abstract idea remains an abstract idea). Claim 9 limits the abstract idea by training the AI/ML models using attended user feedback OR unattended user feedback OR both (a more detailed abstract idea remains an abstract idea).
Claim 12 further limits the abstract idea wherein the user provides confirmation in the application that a suggested sequence is correct, wherein the sequence is automatically added to the workflow (a more detailed abstract idea remains an abstract idea). Claim 13 further limits the abstract idea by limiting the automatic addition to setting declarations and usage variables OR setting properties OR reading/writing to files OR any combination thereof (a more detailed abstract idea remains an abstract idea). Claim 14 further limits the abstract idea in that, when the confidence score is higher than the suggestion confidence threshold, the suggested next sequence of steps is automatically added (a more detailed abstract idea remains an abstract idea). Claim 15 further limits the abstract idea by storing an incorrect next sequence of activities in a database as a negative example (a more detailed abstract idea remains an abstract idea).
None of the limitations considered as an ordered combination provide eligibility because taken as a whole the claims simply instruct the practitioner to apply the abstract idea to a generic computer.
Further regarding claims 1-20, Applicant’s specification discloses that the claimed elements directed to a memory, processor, computing system, designer application (software per se), communication network, computer readable medium storing computer programs, and electronic display at best merely comprise generic computer hardware which is commercially available (Specification: Figure 5). More specifically, Applicant’s claimed features directed to a system do not represent custom or specific computer hardware circuits; instead, the terms merely refer to commercially available software and/or hardware. Thus, as to the system recited, "the system claims are no different from the method claims in substance. The method claims recite the abstract idea implemented on a generic computer; the system claims recite a handful of generic computer components configured to implement the same idea." See Alice Corp. Pty. Ltd., 134 S.Ct. at 2360.
Accordingly, the claims merely recite manipulating data utilizing generic computer hardware (e.g. memory, processor, etc.). Generic computers performing generic computer functions, alone, do not amount to significantly more than the abstract idea. Further the lack of detail of the claimed embodiment in Applicant’s disclosure is an indication that the claims are directed to an abstract idea and not a specific improvement to a machine.
Accordingly, given the broadest reasonable interpretation and in light of the specification, the claims are interpreted to include the process steps being performed in the human mind or with pen and paper. The claim limitations reciting a computer-implemented method at best recite generic, well-known hardware. However, the recited generic hardware simply performs the generic computer functions of displaying or processing data. Generic computers performing generic, well-known computer functions, alone, do not amount to significantly more than the abstract idea. Further, the recited memories are part of every conventional general-purpose computer.
Applicant has not demonstrated that a special purpose machine/computer is required to carry out the claimed invention. A special purpose machine is now evaluated as part of the significantly-more analysis established by the Alice decision and current 35 U.S.C. 101 guidelines; it requires more than a machine that only broadly applies the abstract idea and/or performs conventional functions.
Applicant’s specification discloses that the claimed elements directed to a memory, processor, computing system, designer application (software per se), communication network, computer readable medium storing computer programs, and electronic display merely comprise generic computer hardware which is commercially available (Specification: Figures 13, 14). More specifically, Applicant’s claimed features directed to a system and components do not represent custom or specific computer hardware circuits; instead, the term system merely refers to commercially available software and/or hardware. Thus, as to the system recited, "the system claims are no different from the method claims in substance. The method claims recite the abstract idea implemented on a generic computer; the system claims recite a handful of generic computer components configured to implement the same idea." See Alice Corp. Pty. Ltd., 134 S.Ct. at 2360.
Accordingly, the claims are not patent eligible under 35 U.S.C. 101.
Examiner suggests Applicant review the recently posted 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence (2024 AI SME Update), published in the Federal Register on July 17, 2024 (https://www.federalregister.gov/public-inspection/2024-15377/guidance-2024-update-on-patent-subject-matter-eligibility-including-on-artificial-intelligence), and specifically review the three new examples 47-49 announced by the 2024 AI SME Update, which provide exemplary SME analyses under 35 U.S.C. 101 of hypothetical claims related to AI inventions (https://www.uspto.gov/sites/default/files/documents/2024-AI-SMEUpdateExamples47-49.pdf).
Additionally, examiner suggests Applicant review the recently updated MPEP § 2106.04(d)(1), provided below for Applicant’s convenience.
In short, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement in the functioning of a computer, or an improvement to other technology or a technical field. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but only in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine that the claim improves technology or a technical field. Second, if the specification sets forth an improvement in technology or a technical field, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement; that is, the claim must include the components or steps of the invention that provide the improvement described in the specification. The claim itself does not need to explicitly recite the improvement described in the specification (e.g., “thereby increasing the bandwidth of the channel”). See, e.g., Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), in which the specification identified the improvement to machine learning technology by explaining how the machine learning model is trained to learn new tasks while protecting knowledge about previous tasks to overcome the problem of “catastrophic forgetting,” and the claims reflected the improvement identified in the specification.
Indeed, enumerated improvements identified in the Desjardins specification included disclosures of the effective learning of new tasks in succession in connection with specifically protecting knowledge concerning previously accomplished tasks; allowing the system to reduce use of storage capacity; and the enablement of reduced complexity in the system. Such improvements pertained to how the machine learning model itself would function in operation and therefore were not subsumed in the identified mathematical calculation.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 10, 11, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ramaurthy et al., U.S. Patent Publication No. 2019/0324781 as applied to the claims above and further in view of Vandikas et al., U.S. Patent No. 10013238.
Initially it is noted that the following phrases have been given their broadest reasonable interpretation in light of the specification, specifically the following phrases have been interpreted for the purposes of examination as follows:
RPA workflow – any workflow, series of tasks/activities/work; RPA recites non-functional intended use of the workflow
RPA designer application – any application/software that enables a human user to design, build, create, modify, edit or otherwise develop a workflow; RPA recites non-functional intended use of the application
Machine Learning (ML)/Machine Learning Models; Artificial Intelligence (AI) - any and all algorithms, methods, or techniques which utilize data in order to make predictions or decisions based on that data
Training ML/AI - any/all acts or activities of providing data, feedback or the like to an ML/AI model(s) which generally results in the ability of the ML/AI model to make predictions or decisions based on that data
Further, it is noted that the claims’ recitation of Robotic Process Automation (RPA), specifically as it relates to workflows, designer applications and the like, merely recites non-functional descriptive material (intended use) of the workflow and the ‘designer application’; these recitations are not functionally involved in the steps recited, nor do they alter the recited structural elements. The recited method steps would be performed the same regardless of the specific intended use of the workflow/designer application. Further, the structural elements remain the same regardless of the specific intended use of the workflow. Thus, this descriptive material will not distinguish the claimed invention from the prior art in terms of patentability. See In re Gulack, 703 F.2d 1381, 1385, 217 USPQ 401, 404 (Fed. Cir. 1983); In re Lowry, 32 F.3d 1579, 32 USPQ2d 1031 (Fed. Cir. 1994); MPEP § 2106.
Regarding Claim 1, Ramaurthy et al. discloses a system and method comprising:
Memory storing computer program instructions, and at least one processor communicably coupled to the memory, wherein the computer program instructions are configured to cause the at least one processor (Figures 1, 5) to:
Receive a captured sequence of activities from a robotic process automation designer application ("robotic script generation" – Figure 1; Abstract; Figure 5) of a developer system over a communication network ("user interactions", "process steps", Paragraph 23; Paragraphs 22, 26-28 – see below; Paragraph 49, see below, emphasis added; Figure 9A) in an RPA workflow (robotic process automation – "robotic script generation" – Figure 1; Abstract) comprising one or more activities that have been added OR modified in the workflow by a user (Paragraph 26 – "during operation", see below, emphasis added; Paragraph 45 – "runtime"; Paragraph 51, "real time", see below, emphasis added; Paragraph 52, "real time", see below; Paragraph 22, see below, emphasis added; Paragraph 30, see below, emphasis added; Paragraph 26, see above, emphasis added; Paragraph 32, see below, emphasis added; Paragraph 49, see below, emphasis added; Paragraph 52, see below, emphasis added; Claim 4);
[0026] During operation, the receiving module 110 may receive captured process steps from the plurality of devices 124 A-N via network 122. Herein, the captured process steps may correspond to various sequences of GUI interactions carried out for performing the activity. The captured process steps may be used in training a first ANN and variations of process steps are determined by a processing module 112. The received process steps may be fed to the first ANN in the form of XML files and/or hash codes. The first ANN receives the input via an input layer and generates an output via an output layer in the form of variations of process steps. The number of hidden layers between the input and output layers may vary according to the complexity of the activity that is to be performed. Accordingly, the input may include captured process steps and the output, generated by the processing module 112 by training the first ANN, may include variations of process steps.
[0051] FIG. 8 is an example flow diagram 800 of a process for predicting a next step to be performed in an activity. The technique is implemented with the help of LSTM neural network model. A user operating a software application may get stuck with completing a certain activity. The user may find it difficult to identify action to be performed after completing a series of process steps. For example, the user may be unable to find the "save" button after creating a document. It would be helpful if a software bot could monitor the actions of the user and identify what activity the user is performing in real time. Further, it will be advantageous if the bot could help the user complete the activity if the user gets stuck at any point while performing the activity.
[0022] The Robotic Script Generation System may capture user interactions (e.g., process steps) in a Graphical User Interface (GUI) based application. The captured process steps may relate to user actions performed in the GUI to execute an activity in an application. The captured process steps may then be used in training a first Artificial Neural Network (ANN) to determine variations of process steps for performing the activity. Based on the determined variations, a set of process steps may then be determined. Further, based on the determined set of process steps, robotic scripts may be generated for performing the activity. Furthermore, the robotic scripts, upon execution, may automatically execute the set of process steps to perform the activity in the software application.
[0028] In one example, an activity A may be performed by a user by following sequence of process steps 1, 2, 3, 4, 5. Another user may be performing the same activity by following sequence of process steps 1, 3, 4, 6, 8. These actions are received as captured process steps by the receiving module 110. There may be several variations of process steps that could be followed for performing activity A. The processing module 112 may determine several variations of process steps that may be followed for performing the activity. The processing module 112 uses the first ANN for determining the variations of process steps. In one example, process step variations V1 to Vn are determined by the processing module 112. In one example, V1 may include process steps 1, 2, 3, 4, 5; V2 may include process steps 1, 3, 5, 6; V3 may include process steps 1, 4, 5, and so on. Activity A may be performed by executing any one of the determined variations of process steps.
[0049] At 602, captured process steps related to an activity performed in an application may be received. In one example, the captured process steps may be received from plurality of devices. The process steps may correspond to a sequence of GUI carried out for performing the activity. At 604, variations of the process steps in performing the activity may be determined by training a first ANN using the captured process steps. The first ANN may be an LSTM neural network. At 606, a set of process steps for performing the activity may be determined based on the variations of process steps. The set of process steps may correspond to a set of the determined variations of process steps. At 608, a robotic script may be generated for performing the activity using the determined set of process steps.
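For illustration only (not part of the record), the variation-determination described in paragraphs [0026] and [0028] of Ramaurthy et al., quoted above, can be sketched in Python. The deduplication logic below is an illustrative stand-in for the reference's first ANN, and all names are hypothetical:

```python
from collections import defaultdict

def collect_variations(observed_sequences):
    """Group captured step sequences by activity and keep the distinct
    orderings as candidate variations (illustrative stand-in for the
    reference's first ANN)."""
    variations = defaultdict(list)
    for activity, steps in observed_sequences:
        steps = tuple(steps)
        if steps not in variations[activity]:
            variations[activity].append(steps)
    return dict(variations)

# Two users perform activity "A" with different step orderings,
# mirroring the example in paragraph [0028].
captured = [
    ("A", [1, 2, 3, 4, 5]),
    ("A", [1, 3, 4, 6, 8]),
    ("A", [1, 2, 3, 4, 5]),  # duplicate capture collapses into one variation
]
print(collect_variations(captured))  # {'A': [(1, 2, 3, 4, 5), (1, 3, 4, 6, 8)]}
```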
Provide the captured sequence of activities to one or more artificial intelligence/machine learning (AI/ML) models as input, the one or more AI/ML models trained using previously captured sequences of activities (e.g., train ML models utilizing captured process steps; ML models including at least an "Artificial Neural Network", "ANN"; Paragraph 22 – see above, emphasis added; Paragraph 26 – see above, emphasis added; ANN, RNN, LSTM – Paragraph 33; Paragraphs 37, 41/Figures 2A-2C – feed captured process steps to train the ML model; Paragraph 49; Paragraph 51, see above; Claims 5, 6) and provide suggestions of a next sequence of activities as output (Paragraph 22, sentence 4, see above; Paragraph 26, last sentence, see above; Paragraph 29, see below, emphasis added; Paragraph 40, see below, emphasis added; Paragraph 44, last sentence; Figure 8; Paragraph 51, see above);
Execute the one or more trained AI/ML models (Paragraph 22, 37, 41, 49, 51; Claims 5, 6); and
Receive one or more suggested next sequences of activities as output from the one or more AI/ML models (Paragraph 22, sentence 4, see above; Paragraph 26, last sentence, see above; Paragraph 29, see below, emphasis added; Paragraph 40, see below, emphasis added; Paragraph 44, last sentence; Figure 8; Paragraph 51, see above, emphasis added; Paragraph 53, see below);
[0029] Further, optimizing module 114 determines a set of process steps for performing the activity based on the determined variations of the process steps. The set of process steps may correspond to a set of process variations for performing the activity. Optimizing module 114 may determine the set of process steps in such a way that the determined set of process steps could perform the activity in an optimal manner using minimum amount of resources. The determined set of process steps may substantially reduce processor and the memory usage for performing the activity. Amongst V1 to Vn determined by the processing module 112, the set of process variation which performs the activity most efficiently may be selected by the optimizing module 114.
[0040] Once trained, the LSTM model may identify variations of process steps using regression technique. Further, process variations may be linked to activities by the LSTM model using classification techniques. In both scenarios, the model may be trained by providing a sequence of steps via XML files or hash codes at the input layer. In regression post training, given a sequence of steps the neural network may predict the likely next steps to be performed for performing the activity. In classification, the neural network determines the category to which the steps belong to thereby connecting the process steps to the activity.
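For illustration only, the "regression post training" behavior described in paragraph [0040] of Ramaurthy et al. (given a sequence of steps, predict the likely next step) can be sketched with a simple bigram-frequency model standing in for the reference's LSTM neural network; all names are hypothetical:

```python
from collections import Counter, defaultdict

class NextStepPredictor:
    """Bigram-frequency stand-in for the LSTM 'regression' use described
    in paragraph [0040]: given a sequence of steps, predict the likely
    next step from previously captured sequences."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, sequences):
        # Count how often each step follows each other step.
        for seq in sequences:
            for current, nxt in zip(seq, seq[1:]):
                self.transitions[current][nxt] += 1

    def predict(self, steps):
        # Return the most frequently observed successor of the last step.
        candidates = self.transitions.get(steps[-1])
        if not candidates:
            return None
        return candidates.most_common(1)[0][0]

predictor = NextStepPredictor()
predictor.train([["open", "edit", "save"], ["open", "edit", "save"],
                 ["open", "edit", "print"]])
print(predictor.predict(["open", "edit"]))  # save
```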
[0053] For illustration, let us consider that the target business application for FIG. 8 is a spread sheet application called "Letter Pad". Every user interaction with the GUI is captured as captured steps at 804. For example, a user, while creating a text-based document accidentally deletes a whole paragraph. The user may want to undo his action but may not know how to perform this function. Here, using the techniques provided in 800, the system may assist the user in finding the "undo button" in Letter Pad. Steps 812 to 818 are iterative and real-time in nature and are very crucial in performing next step prediction. Since the process steps are uploaded batch wise, the first batch might not have all necessary information for predicting the next step. The LSTM model comes into picture here, the previous batch is stored in the LSTM network's short-term memory. In one example, four batches of step ids may be required for identifying an activity. The LSTM keeps the step ids in its memory until all the batches are received and the activity is identified. Once the activity is identified and the next step prediction is completed, the LSTM may forget the step ids stored in its memory.
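For illustration only, the batch-wise behavior described in paragraph [0053] (step-id batches held in the LSTM's short-term memory until enough batches arrive to identify the activity, after which the memory is forgotten) can be mimicked in plain Python. The four-batch threshold follows the reference's example; the lookup-table identification logic and all names are illustrative assumptions, not the reference's LSTM:

```python
class BatchMemory:
    """Plain-Python mimic of the batch-wise accumulation in paragraph
    [0053]: buffer step-id batches until enough have arrived, identify
    the activity, then forget the buffered steps."""

    BATCHES_NEEDED = 4  # "four batches of step ids may be required"

    def __init__(self, activity_signatures):
        self.activity_signatures = activity_signatures  # assumed lookup table
        self.buffer = []

    def receive(self, batch):
        self.buffer.append(tuple(batch))
        if len(self.buffer) < self.BATCHES_NEEDED:
            return None  # not enough information yet to identify an activity
        steps = tuple(s for b in self.buffer for s in b)
        activity = self.activity_signatures.get(steps, "unknown")
        self.buffer.clear()  # forget once the activity is identified
        return activity

signatures = {(1, 2, 3, 4, 5, 6, 7, 8): "undo-paragraph"}
memory = BatchMemory(signatures)
for batch in ([1, 2], [3, 4], [5, 6], [7, 8]):
    result = memory.receive(batch)
print(result)  # undo-paragraph
```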
Transmit the one or more suggested next sequences of activities to the developer computing system, wherein the developer computing system is configured to display the one or more suggested next sequences of activities via an electronic display (Paragraphs 51, 53; Figures 8, 12, 13).
Ramaurthy et al. does not disclose a suggestion confidence threshold as claimed.
Vandikas et al., from the same field of endeavor of workflows, discloses a system comprising:
A ‘developer’ computing system comprising a designer application (e.g. workflow development environment; Figure 1, Element 104; Figure 9; Column 1, Lines 25-3-50; Column 11, Lines 42-47; Figure 4);
Receive a captured sequence of activities from a robotic process automation designer application (Column 8, Lines 4-60; Column 10, Lines 35-68; Figure 1, Element 110; Figure 2, Elements 202, 204; Column 8, Lines 4-60; Column 10, Lines 35-68);
Receive one or more suggested next sequences of activities as output (auto-completion, workflow element recommendation; Column 8, Lines 1-3, 32-68; Column 9, Lines 1-20; Column 10, Lines 1-34; Column 15, Lines 4-40; Column 16, Lines 1-7; Figure 2, Element 208; Figure 3, Element 114; Figures 4, 11) after 'developers' add or modify one or more activities in the workflow (Column 10, Lines 35-41); and
Responsive to the one or more suggested next sequences of activities and one or more respective confidence scores exceeding a threshold, transmitting/displaying/outputting the suggested next sequence of activities and confidence score (Column 15, Lines 5-11).
It would have been obvious to one of ordinary skill in the art that the system and method disclosed by Ramaurthy et al. would have benefited from utilizing a confidence/probability/likelihood value/threshold to determine which next sequence of activities to suggest, in view of the disclosure of Vandikas et al., the resulting system/method providing a ranked/ordered listing of suggested next workflow element options/suggestions (auto-complete) in order of relevance, similarity, or the like (Vandikas et al.: Column 2, Lines 41-45; Column 10, Lines 50-55; Column 15, Lines 42-68; Column 16, Lines 1-6).
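For illustration only, the confidence-threshold teaching attributed to Vandikas et al. above (suppress suggestions below a threshold and present the survivors in ranked order) can be sketched as follows; the function and data names are illustrative assumptions, not the reference's implementation:

```python
def rank_suggestions(suggestions, threshold):
    """Keep only suggested next sequences whose confidence meets or
    exceeds the threshold, ordered most-confident first."""
    kept = [(seq, conf) for seq, conf in suggestions if conf >= threshold]
    return sorted(kept, key=lambda item: item[1], reverse=True)

suggestions = [
    (["click", "save"], 0.92),
    (["click", "undo"], 0.40),   # below threshold: suppressed
    (["open", "menu"], 0.75),
]
print(rank_suggestions(suggestions, threshold=0.5))
# [(['click', 'save'], 0.92), (['open', 'menu'], 0.75)]
```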
Regarding Claims 4, 11 and 20, Ramaurthy et al. discloses a system wherein the trained AI/ML models detect that one or more added OR modified activities within the designer application are indicative of a next sequence of activities as the developer/user adds OR modifies the activities in the workflow, the detection based on running parameters of the workflow through the one or more trained AI/ML models (during operation, runtime, real time, etc.; Paragraphs 22, 26, 31, 32, 49, 52), and producing a sequence of next steps (Paragraphs 22, 40, 50-53; Figure 8).
Ramaurthy et al. does not disclose a threshold as claimed.
Vandikas et al., from the same field of endeavor of workflows, discloses a system comprising send one or more suggested next sequence of activities to the application that meet or exceed a suggestion confidence threshold (Column 15, Lines 5-11).
Regarding Claim 5, Ramaurthy et al. does not disclose a threshold as claimed.
Vandikas et al., from the same field of endeavor of workflow design, discloses a system and method wherein the suggestion confidence threshold is a probabilistic threshold based on a confidence score (Figure 7, Element 704; Figures 11-13; Column 13, Lines 7-34; Column 15, Lines 4-52; Columns 16, 17).
Allowable Subject Matter
Claims 1-20 would be allowable over the prior art if rewritten to overcome the pending rejections under 35 U.S.C. 101.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Liu et al., U.S. Patent Publication No. 2014/0310053, discloses a system and method comprising determining and providing to a developer (business process modeler) suggested next sequences of activities in a business process/workflow.
Marcu et al., U.S. Patent Publication No. 2017/0109676, discloses a system and method for providing auto-completion suggestions, including sequences of next activities/tasks, to developers within a workflow designer application.
Rao et al., U.S. Patent Publication No. 2016/0062745, discloses a developer designer application (integrated development environment) comprising providing auto-completion suggestions to a developer and receiving confirmation as to whether the suggestions are correct/incorrect.
Maheshwari et al., U.S. Patent Publication No. 2019/0317803, discloses a system and method for suggesting a predicted next sequence of activities/tasks in a workflow utilizing trained AI/ML model(s).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT L JARRETT whose telephone number is (571)272-7033. The examiner can normally be reached M-TH 6am-4:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Beth Boswell can be reached at (571) 272-6737. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
SCOTT L. JARRETT
Primary Examiner
Art Unit 3625
/SCOTT L JARRETT/Primary Examiner, Art Unit 3625