DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 6/25/25 has been entered.
Notice to Applicant
The following is a Non-Final Office action. In response to Examiner’s Final Rejection of 4/1/25, Applicant, on 6/25/25, amended claims. Claims 1-9 and 11-19 are pending in this application and have been rejected below.
Response to Amendment
Applicant’s amendments are acknowledged.
The previous 35 U.S.C. 112(b) rejections are withdrawn. New 35 U.S.C. 112(b) rejections are necessitated by the amendments.
The double patenting rejections are withdrawn in light of the amendments made to 18/165,343; 18/167,075; and 18/335,151.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5, 8, 15, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 5 and 15 recite the limitation "a metadata". There is insufficient antecedent basis for this limitation in the claims, as claims 1 and 11 now recite “metadata.” Examiner suggests Applicant amend the claims to clarify, or cancel them if they are now duplicative.
Claims 8 and 18 recite the limitation “and the data footprint module associates the processing data with corresponding historical data”. There is insufficient antecedent basis for this limitation in the claims, as claims 1 and 11 now recite “historical data.” Examiner suggests Applicant amend the claims to clarify, or cancel them if they are now duplicative.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9 and 11-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e. an abstract idea) without reciting significantly more.
Step One - First, pursuant to Step 1 in MPEP 2106.03, claim 1 is directed to a system comprising a processor performing operations, which is a statutory category.
Step 2A, Prong One - MPEP 2106.04 - Claim 1 recites:
“A data-driven system based on a data-driven model, comprising:
a storage …, storing a data-driven …, a knowledge map … ([0026] as published states “the management knowledge map 1221 and the action logic map 1222 may respectively define the operation logic of each model based on the enterprise management knowledge. For example, for specific documents (such as purchase requisitions, purchase orders, etc.), the corresponding operation logics may be defined so that the system may automatically execute these operation logics without manual operations by the experienced personnel.”), and a data footprint … ([0032] as published states – “In operation S603, the data footprint module 123 may establish the status change history and the transformation relationship of the data according to the business metadata definition, the business data, and the data status (that is, associating the processing data with the corresponding historical data). In operation S604, the data footprint module 123 may provide the corresponding historical business data according to the requirement of the task engine module 1211.”); and
…
and configured to obtain a first business data from an external …, wherein the first business data is a changed data … executes the data-driven … to provide a first data status ([0028] as published states “current data status” (such as the project to be initiated)) and first feature description information corresponding to the first business data to the knowledge map … ([0025] as published states “Therefore, the data-driven system 100 of the embodiment may automatically initiate tasks for the changed business data based on the enterprise management knowledge to drive the associated business logic and execute the associated business operation, so as to efficiently develop the business activities in the enterprise and enable the user (employee) to automatically obtain the to-do task based on the data change. In this way, the user (employee) may complete the task according to the system guideline and gradually achieve the business goal, thereby allowing the workload of the user (employee) to be effectively reduced while the business processing experience of the enterprise may be inherited at the same time.”),
wherein … executes the knowledge map … to analyze the first data status and the first feature description information, and returns a first data-driven model to the data-driven …, and the data-driven … executes a task in the first data-driven model (cl. 7 – analyzes context data to generate action logic; [0020] as published - In step S320, the processor 110 may execute the knowledge map module 122 to analyze the current data status and feature description information, and return the corresponding data-driven model to the data-driven module 121. [0024] as published – determine whether the business data of the task execution result is in the completed state; [0028] update task status of project number recorded in the data footprint module 123 as completed),
wherein the data-driven … executes the task in the first data-driven model to obtain a task execution result, and the … executes the data footprint … to record the task execution result, the data footprint … associates the task execution result with corresponding historical data,
wherein the data-driven… determines whether a second business data of the task execution result is in a completed state,
when the data-driven … determines that the second business data of the task execution results is not in the completed state, the data-driven module obtains the second business data of the task execution result and associated metadata from the data footprint … according to the corresponding historical data, and the … analyzes the second business data of the task execution result through the knowledge map module to obtain a second data status and second feature description information of the second business data, and returns a second data-driven model to the data-driven module, so that the data-driven module executes a next task in the second data-driven model according to the second business data and the associated metadata,
when the data-driven … determines that the second business data of the task execution result is in the completed state, the data-driven … outputs the task execution result to the external …, so that the external … generates notification according to the task execution result
([0025] as published states “In operation S414, the task engine module 1211 may determine whether the business data of the task execution result is in the completed state. If the business data of the task execution result is in the completed state, then the task engine module 1211 may, for example, provide the task execution result to the external interactive interface to notify the user. If the business data of the task execution result is not in the completed state, the management knowledge map 1221 of the knowledge map module 122 may analyze the next business data of the task execution result through the knowledge map module 122 to obtain the next data status and next feature description information of the next business data, so that the management knowledge map 1221 of the knowledge map module 122 may return the corresponding next data-driven model to the task engine module 1211 of the data-driven module 121”.) Based on broadest reasonable interpretation in light of the specification, the limitations now recited further narrow the abstract idea by reciting a check for when the task result is in a “completed state”, and by reciting a “second/next” data status, “second/next” business data, a “second/next” data-driven model, and a “second/next” task. This appears to only be in [0025] as published, and appears to relate to performing the next/second task to “advance” task processing until it reaches a completed state.
As drafted, this is, under its broadest reasonable interpretation, within the abstract idea grouping of “certain methods of organizing human activity” (managing relationships between people). The claims are interpreted as storing data regarding a business process/operations ([0020], [0025] as published), providing a status (e.g., completion) and description as there is a “change” in business data, and the specification explains that the computer is executing some manual tasks (see “processor executes the data-driven module… corresponding to changed business data to the knowledge map”, “executes the data footprint module to record a task execution result… by executing the task”, and [0025] and [0026] “for specific documents (such as purchase requisitions, purchase orders, etc.), the corresponding operation logics may be defined so that the system may automatically execute these operation logics without manual operations by the experienced personnel.”). The limitations added 8/4/25 characterize the “first data” as “changed data,” associate task results with historical data, and recite metadata (Applicant’s specification [0029] as published gives examples of “metadata” of: action number; specific business data, i.e., context data; context data description), and the last limitation gives a notification of a task result to an external user. The claim is directed to “certain methods of organizing human activity” and “managing personal behavior (including following rules or instructions)” because it has data on a knowledge map of a series of manual tasks by users along with a “change” in business data for tasks as the process proceeds to completion.
Step 2A, Prong Two - MPEP 2106.04 - This judicial exception is not integrated into a practical application. In particular, claim 1 recites additional elements that are:
A data-driven system based on a data-driven model, comprising:
a storage device, storing a data-driven module, a knowledge map module, and a data footprint module; and
a processor, coupled to the storage device, and configured to obtain a first business data from an external interactive device, wherein the first business data is a changed data,
wherein the processor executes the data-driven module to provide a first data status ([0028] as published states “current data status” (such as the project to be initiated)) and first feature description information corresponding to the first business data to the knowledge map module,
wherein the processor executes the knowledge map module to analyze the first data status and the first feature description information, and returns a first data-driven model to the data-driven module, and the data-driven module executes a task in the first data-driven model,
wherein the data-driven module executes the task in the first data-driven model to obtain a task execution result, and the processor executes the data footprint module to record the task execution result, the data footprint module associates the task execution result with corresponding historical data,
wherein the data-driven module determines whether a second business data of the task execution result is in a completed state,
when the data-driven module determines that the second business data of the task execution results is not in the completed state, the data-driven module obtains the second business data of the task execution result and associated metadata from the data footprint module according to the corresponding historical data, and the processor analyzes …
when the data-driven module determines that the second business data of the task execution result is in the completed state, the data-driven module outputs the task execution result to the external interactive device, so that the external interactive device generates notification according to the task execution result
The additional elements, “processor,” “storage device,” “module,” and “external interactive device” (example in [0019] as published is “user interface mounted on a terminal device”), when viewed individually or in combination, are viewed as computer elements that “apply it [abstract idea] on a computer” at MPEP 2106.05(f), and as a “field of use” (MPEP 2106.05(h)) for having more than one computer pass information back and forth. At this time, there are no or limited details on how a computer is executing the tasks that people perform.
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim also fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, and/or an additional element that applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See 84 Fed. Reg. 55. The claim is directed to an abstract idea.
Step 2B in MPEP 2106.05 - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a processor/computer, storage device, and modules to execute operations fall under MPEP 2106.05(f) (Mere Instructions to Apply an Exception – “Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” Alice Corp., 134 S. Ct. at 235) and “field of use” (MPEP 2106.05(h)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
In addition, “obtain a first business data from an external interactive device” and giving a notification to the “external interactive device” on completion is a conventional computer function – See MPEP 2106.05(d)(II), Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321.
The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. The claim is not patent eligible. Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. In addition, the steps involving recording task execution results in the data footprint module and retrieving the corresponding historical data are viewed as conventional functions at Step 2B (See MPEP 2106.05(d)(II)(iv) - Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306).
Independent claim 11 is directed to a method at Step 1, which is a statutory category. Claim 11 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. The limitations of “processor” and “module” are viewed as “apply it [abstract idea] on a computer” at Step 2A, Prong Two and Step 2B. The remaining limitations are similar to claim 1 above. The claim is not patent eligible.
Dependent claims 2-4 and 12-14 recite limitations regarding an “event number” and “item information” for the task along with “current data status” and “feature description information.” These are just viewed as data items describing the tasks at this time and narrow the abstract idea. To any extent they are performed “by a computer,” this is also viewed as “apply it [abstract idea] on a computer” under MPEP 2106.05(f).
Dependent claims 4-7 and 14-17 further recite “action logic,” which [0026] as published explains defines “operation logic” that can be for automatically executing purchase requisitions, purchase orders, etc. “without manual operations by the experienced personnel.” The “metadata” (claims 5, 15) just represents context of specific business data according to [0029] as published; the “context data” is explicitly claimed in claims 6, 16. These are just viewed as data items describing the tasks at this time. To any extent they are performed “by a computer,” this is also viewed as “apply it [abstract idea] on a computer” under MPEP 2106.05(f).
Claims 8 and 18 narrow the abstract idea by stating that execution of tasks is associated with corresponding historical data. To any extent this is performed “by a computer,” it is also viewed as “apply it [abstract idea] on a computer” under MPEP 2106.05(f).
Claims 9 and 19 narrow the abstract idea by reciting that “previous” processing data is obtained. To any extent this is performed “by a computer,” it is also viewed as “apply it [abstract idea] on a computer” under MPEP 2106.05(f).
Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
For more information on 101 rejections, see MPEP 2106.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-9 and 11-19 are rejected under 35 U.S.C. 103 as being unpatentable over Rao (US 2021/0342723) in view of Ma (US 2020/0206920) and Takatsuka (US 2006/0085245).
Concerning claim 1, Rao discloses:
A data-driven system based on a data-driven model (Rao – see par 28, FIG. 1 – architecture 100 for automating enterprise processes; See par 32 - The discovery module 111 may view the process and trace the processes from the front end, to the fulfillment center. The discovery module 111 may generate a digital representation of the process and identify the different applications or systems that the process uses. see par 60 - The discovery platform 110 may use the event and system logs, audit trails, input and output data from the execution of the intelligent automation applications 135 to generate new updated digital representations of the business process. The business process may then be analyzed to determine if there was improvement in the efficiency of the business process, and the optimization module 133 may then adjust the model to further improve the performance of the business process), comprising:
a storage device (Rao – see par 28, FIG. 1 – architecture includes Automation platform 130; discovery platform 110; AI & ML Core Technology Platform 120, etc; See FIG. 11, par 111-114 – example machine of a computer system to perform any of the methodologies herein; includes data storage devices 1118; The processor device 1102 is configured to execute instructions 1126 for performing the operations and steps discussed herein; par 116 - The data storage device 1118 may include a machine-readable storage medium 1124 (also known as a computer-readable medium) on which is stored one or more sets of instructions or software 1126 embodying any one or more of the methodologies or functions described herein), storing a data-driven module (Rao – See par 28 - the automation of enterprise business processes may utilize an integrated approach machine learning and artificial intelligence to automate the full spectrum of business processes in an Enterprise. The entire automation stack covers several components that operate in an integrated fashion to deliver a continuously learning and intelligent automation solution to address an enterprise need. See par 29, 45-46 – The automation platform 130 may comprise an operation module 131, automation module 132 and optimization module 133. 130 may leverage the machine learning models generated by the AI & ML core technology platform 120, to deliver intelligent automation for complex processes in specific business domains like finance, enterprise security, governance risk and compliance (GRC), HR and other enterprise business activities.), a knowledge map module (Rao – see par 33, FIG. 1 – discovery platform includes discovery module 111; The discovery module 111 may access event logs, system logs, application logs, or any log of user activity during the performing a process. The log data may then be fed into a process mining stack or machine learning algorithm to identify the process flow. 
This mining process may be used in generating the digital representation of the process. see par 35 – Discovery platform 110 also includes Visualization module 112 to generate visualization of simulated activity), and a data footprint module ([0032] as published states – “In operation S604, the data footprint module 123 may provide the corresponding historical business data according to the requirement of the task engine module 1211.”; Rao – see par 31 - Discovery module 111 may use process mining techniques applied to application and user footprints, digital exhaust and event logs to discover and create a digital representation of the business process.); and
a processor, coupled to the storage device (Rao – See par 26 – embodiments implemented by computer system that includes processor, memory, and non-transitory computer-readable medium; see par 116 - the instructions 1126 may also reside, completely or at least partially, within the main memory 1104 and/or within the processor device 1102 during execution thereof by the computer system 110), and configured to obtain a first business data from an external interactive device, wherein the first business data is a changed data ([0019] as published states “The aforementioned external interactive system may include, for example, a user interface mounted on a terminal device” – Rao discloses the limitations based on broadest reasonable interpretation in light of the specification – See par 34 - The identified process may then be visualized for display to an operator of the architecture 100. The digital representation may show the processes involved in the completion of the activity or task and can be played back with historical or custom data; See par 72 - At multiple business locations, teams 401, accountants 402, controllers 403 and others 404 may be employed for the purpose of generating and validating a financial report 406. The financial report validation process 405 may require large numbers of people performing the same or similar tasks across an enterprise. The financial report validation process 405 may take weeks to months to complete. For example, teams 401 may have to provide their financial documents to the accountants 402. The accountants 402 generate various financial reports based on the input received from the teams 401;
see also Ma – par 21 – tasks … for business transactions, where human has computing device; par 22 – event stream is recorded sequence of UI actions from a computing device; par 40 - a more complex but ultimately rote procedure such as invoice processing, asset acquisition, image processing, submitting a work or data processing request, entering data into a repository, converting data from one format to another, etc. may include a suitable set of tasks for automation, depending on the frequency and weight of performance, among other variables. The set of tasks may include all or only a portion of all tasks required to complete the overall procedure. See par 41 - identifying tasks, and particularly break-points between one task and another (as opposed to different parts of the same task), is a non-trivial and important aspect of identifying processes for robotic process automation, particularly as the length of an event stream and/or the number of event streams evaluated increase. See par 185 - According to this assumption, event streams may be segmented into “application traces” and clusters may be generated on the basis of such application traces rather than attempting to identify overall traces corresponding to completion of an entire task);
wherein the processor executes the data-driven module to provide a first data status ([0024] as published – determine whether the business data of the task execution result is in the completed state; [0028] as published – update task status of project number recorded in the data footprint module 123 as completed; “current data status” (such as the project to be initiated); Rao – See par 88, FIG. 8 – method for generating process models and configuring automation tools; See par 92 - At step 804, the method includes generating a digital representation of the process based on the process model. The digital representation may allow for the process to be replayed visually to give the user a better understanding of the entire process from beginning to end. This may visually highlight bottlenecks in the process.) and first feature description information corresponding to the first business data to the knowledge map module (Rao – see par 47 – the automation applications deliver automation at scale to handle “rapidly changing business processes”; see par 39 - the AI & ML platform 120 includes a Natural Language Processing (NLP) module trained on domains specific to the process being automated, to perform syntax operations, such as parsing, and semantic operations (disclosing feature description information); see par 49 - The understanding of the earnings report may also incorporate the analysis of extracted language, terms, fields and values, as well as the organization of the language, terms, fields and values, recognized in the earnings report);
wherein the processor executes the knowledge map module to analyze the first data status and the first feature description information, and…, and the data-driven module executes a task in the first data-driven model (cl. 7 – analyzes context data to generate action logic; [0020] as published - In step S320, the processor 110 may execute the knowledge map module 122 to analyze the current data status and feature description information, and return the corresponding data-driven model to the data-driven module 121; Rao – see par 56 - Multiple business process models from different domains may be used in combination when performing validation or generation processes for hybrid applications that span more than one domain. See par 79 - As described, configuring the automation tools, in some embodiments, may include training machine learning tools to label portions of the structured input data 603 with categories and elements relevant to validation of a financial report or other tasks as may be applicable in the context of the environment or domain in which the automation system 600 is deployed. see par 86 - The automation system 700 can identify domain-specific labels 710 corresponding to param#s and relevant to the query response 735. Machine learning models 715 can be trained to find and label portions of the structured input data 703 which contain underlying data corresponding to the identified domain-specific labels 710).
To any extent that Rao does not “returns a first data-driven model to the data-driven module,” Ma discloses:
wherein the processor executes the knowledge map module to analyze the current data status and the feature description information, and “returns a first data-driven model to the data-driven module,” and the data-driven module executes a task in the first data-driven model (See Ma – see par 79 – ancillary/contextual information relating to events recorded in operation 302 may include e.g. raw text; See par 152 - For instance, in one approach, change point detection may be employed, e.g. using Gaussian processes, to identify break points between tasks. In more approaches, a predefined window of length N may be used to segment event streams into individual traces. Preferably, N has a value of about 30 events, but may vary according to the type of task typically performed by a given enterprise, e.g. according to a known average task performance time and/or average number of interactions involved in performing a task. see par 314 - bootstrapping knowledge from previously-generated robots, such as interactive, automated agents as described above, enables the process of automating tasks to be intelligent and responsive to changes in the system, user behavior, etc. over time. For instance, different data may be used or required to accomplish a given task at different times or in different contexts. As these changes emerge, the recorded event streams reflect the changing trends and may be used to generate new models for RPA that more accurately reflect current circumstances).
Rao and Ma disclose:
wherein the data-driven module executes the task in the first data-driven model to obtain a task execution result, and the processor executes the data footprint module to record the task execution result, the data footprint module associates the task execution result with corresponding historical data (Rao – see par 33 – in some embodiments, discovery module 111 may be automated. The discovery module 111 may access event logs, system logs, application logs, or any log of user activity during the performing of a process. The log data may then be fed into a process mining stack or machine learning algorithm to identify the process flow; see par 54 – an instance of finance module 136 may be run by a bot and configured to validate invoices. Invoices may be uploaded as images or documents, then read by the bot. The values extracted from the invoice may then be compared to threshold values or other metrics to determine approval or denial of the invoice; see par 75 – Both user-powered tagged documents 501 and API queries 502 can be used as sources of validation 503. Validation topic 504 may be extracted from the user-powered tagged documents 501, API queries 502, sources of validation 503 or supplied as separate user input. The validation process may generate large amounts of metadata 506, which may be stored along with a detailed accounting of sources accessed, date and time accessed, validation status of each element, and other details required to create and store an audit trail 507. The audit trail 507 can include a history of sources and data used to arrive at a validation assessment;
see also Ma – see par 327 – user feedback over time may be utilized to bootstrap confidence in treating future sequences of events in a similar manner. For instance, over time various series of operations may be proposed in response to traces including particular sequences of events. Where the sequence of events may be performed more efficiently using an alternative set of operations, the model building process may propose such a substitution to a human user for confirmation/negation/modification; see par 328 – For example, if a particular substitution, grouping, etc. of events is historically confirmed with substantial frequency, then this may indicate a strong preference for making similar substitutions in the future, potentially even without seeking user confirmation),
wherein the data-driven module determines whether a second business data of the task execution result is in a completed state (Rao – See par 34 - The identified process may then be visualized for display to an operator of the architecture 100. The digital representation may show the processes involved in the completion of the activity or task and can be played back with historical or custom data;
See also Ma – see par 32 - For instance, by not having to wait for user input, the software robot may act immediately upon completion of each event and proceed to the next event, while a UI without any such robotic process automation must await human input in order to proceed at each, or at least many, steps of the process; see par 185 - event streams may be segmented into “application traces” and clusters may be generated on the basis of such application traces rather than attempting to identify overall traces corresponding to completion of an entire task),
when the data-driven module determines that the second business data of the task execution result is not in the completed state, the data-driven module obtains the second business data of the task execution result and associated metadata (Applicant’s specification [0029] as published gives examples for “metadata” of: action number; specific business data, i.e. context data; and context data description. Rao – see par 31 – Discovery module 111 may use process mining techniques applied to application and user footprints, digital exhaust and event logs to discover and create a digital representation of the business process; see par 75 – The validation process may generate large amounts of metadata 506, which may be stored along with a detailed accounting of sources accessed, date and time accessed, validation status of each element, and other details required to create and store an audit trail 507. The audit trail 507 can be queried independently or in combination with the PDF 505; see also claim 2 rejection below, e.g. par 32 – For example, in the domain of e-commerce, events from the time a user logs into a system, orders a product and receives a product are logged. The discovery module 111 may view the process and trace the processes from the front end, to the fulfillment center;
see also Ma – see par 22 – an “event stream” may also include contextual information associated with the user's interactions, such as an identity of the user, various data sources relied upon/used in the course of the user's interactions, content of the computing device's display, including but not limited to content of a particular window, application, UI, etc., either in raw form or processed to reveal, for instance, key-value pairs, and/or other elements displayed on the screen, particularly contextual information such as the window/application/UI/UI element, etc. upon which the user is focused; … one or more “groups” with which the user is associated (e.g. a project name, a workgroup); see par 233 – method 300 includes normalizing the recorded event streams, wherein the normalizing involves: identifying equivalent events among the recorded event streams; combining related events into a single event within a given recorded event stream; and/or identifying events having no appreciable impact on performance of the corresponding task) from the data footprint module according to the corresponding historical data, and the processor analyzes the second business data of the task execution result through the knowledge map module to obtain a second data status and second feature description information of the second business data, and returns a second data-driven model to the data-driven module, so that the data-driven module executes a next task in the second data-driven model according to the second business data and the associated metadata (Ma – see par 145 – If a number of identified event subsequences meets or exceeds the minimum frequency threshold, the subsequence is extended and the search performed again. At this point, task boundaries may be defined with high confidence; see par 152 – in certain approaches segmentation per operation 306 of method 300 uses unsupervised models to delineate different tasks within event streams.
For instance, in one approach, change point detection may be employed, e.g. using Gaussian processes, to identify break points between tasks; see par 185, 188 – tasks may span multiple applications… traces are segmented based on information available within the event stream, e.g. process name, process ID (disclosing metadata); the overall sequence of events for any given task is still represented within the application traces, albeit possibly in the form of multiple sequential traces rather than a single sequence of events; see par 329 – each event stream corresponding to a human user interacting with a computing device to perform one or more tasks; concatenating the event streams; segmenting some or all of the concatenated event streams to generate one or more individual traces (each performing one or more tasks) performed by the user interacting with the computing device, each trace corresponding to a particular task; identifying, from among some or all of the clustered traces, one or more candidate processes for robotic automation; prioritizing the candidate processes; and selecting at least one of the prioritized candidate processes for robotic automation (disclosing that a next process/model and task can be selected for automation)),
Since the claims have an added layer of “completed state,” to the extent that the claim is referring to proceeding to a “next task” even if “not completed,” Takatsuka is also applied:
when the data-driven module determines that the second business data of the task execution result is “not in the completed state”, the data-driven module obtains the second business data of the task execution result and associated metadata from the data footprint module according to the corresponding historical data, and the processor analyzes “the second business data of the task execution result” through the knowledge map module to obtain “a second data status” and second feature description information of the second business data, and returns a second data-driven model to the data-driven module, so that the data-driven module executes “a next task” in the second data-driven model according to the second business data and the associated metadata (Takatsuka – see par 45-46 – members participate in synchronous and asynchronous activities with ad-hoc tasks; an asynchronous task may be defined as a task that need not be completed before another task is begun, and asynchronous tasks may run in parallel with each other; business process management workflows may be used to automate very well defined and formal processes used to accomplish business objectives; see par 109 – The process designer may specify a name for the task. In the case of a workflow step that creates an ad-hoc task in a collaboration teamspace, the task name could be set to the same name as the workflow step name; see par 145 – All team collaboration artifacts, such as teamspaces, discussion threads, ad-hoc tasks, meetings, notes, and notifications, may be automatically declared or classified as records based on contextual information associated with an automated business process (or workflow) or contextual information associated with the teamspace. The collaboration system provides collection and recordation of this kind of information, so that it can later be used during the process.)
Takatsuka discloses:
when the data-driven module determines that the second business data of the task execution result is in the completed state, the data-driven module outputs the task execution result to the external interactive device, so that the external interactive device generates notification according to the task execution result (Takatsuka – see par 87 – implement workflow processes that capture a business process and automate the process; see par 109 – task assigner may receive email when task completed; see par 156 – user interface has interface components, display screen, software to process information, and communication links to other computing systems; see par 165 - One type of process can be to issue notifications. The collaboration management system may be configured to allow internal and/or external users (or organizations) to subscribe to teamspaces for the purpose of being notified whenever certain characteristics of the teamspace change or when certain processes are initiated or completed.).
Rao, Ma, and Takatsuka are analogous art as they are directed to automating business tasks and having business process models/mining (see Rao – Abstract; Ma – Abstract, par 34; Takatsuka – par 46). 1) Rao discloses training with labels, categories, and elements relevant to a financial report or other tasks for the automation system (See par 79) and finding portions of input containing underlying data corresponding to identified domain-specific labels (See par 86). Ma improves upon Rao by disclosing looking at contextual information recorded (par 79), identifying break points between tasks, as well as types of tasks (See par 152), and generating new models for RPA based on different data for different tasks in different contexts (See par 314). One of ordinary skill in the art would be motivated to further include different contexts, different types of tasks, and different RPA (Robotic Process Automation) models to efficiently improve upon the domain-specific labels in Rao. 2) Rao discloses showing completion of an activity/task (See par 34); that a processing entry, e.g. validation of an earnings report, is popped out of the queue (See par 66); and that multiple teams/people (401-404) are providing information to each other to validate financial reports (See par 72, FIG. 4). Ma discloses completion and then proceeding to the next event (See par 32), completing an entire task in an application trace (See par 185), and prioritizing processes for automation (See par 329). Takatsuka improves upon Rao and Ma by disclosing automating tasks and considering asynchronous/ad-hoc tasks that need not be completed before another task is begun and that run in parallel (See par 45), and further providing notifications to internal or external users when teamspaces change or processes are completed (See par 156, 165).
One of ordinary skill in the art would be motivated to further include branching tasks in workflow automation and notifications upon process completion to efficiently improve upon the domain-specific labels in Rao and the prioritizing of processes for automation in Ma.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the domain-specific labels that are part of automating business process models in Rao to further have different types of tasks, contexts, and models for RPA as disclosed in Ma, and to further include tasks running in parallel and notifications of completion of processes as disclosed in Takatsuka, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success.
Concerning independent claim 11, Rao, Ma, and Takatsuka disclose:
A data-driven method based on a data-driven model (Rao – same as claim 1 – see par 28, FIG. 1 – architecture 100 for automating enterprise processes; see par 32 – The discovery module 111 may view the process and trace the processes from the front end, to the fulfillment center. The discovery module 111 may generate a digital representation of the process and identify the different applications or systems that the process uses; see par 60 – The discovery platform 110 may use the event and system logs, audit trails, input and output data from the execution of the intelligent automation applications 135 to generate new updated digital representations of the business process. The business process may then be analyzed to determine if there was improvement in the efficiency of the business process, and the optimization module 133 may then adjust the model to further improve the performance of the business process).
The remaining limitations are similar to claim 1 above. It would be obvious to combine Rao, Ma, and Takatsuka for the same reasons as claim 1.
Concerning claims 2 and 12, Rao and Ma disclose:
The data-driven system according to claim 1, wherein the data-driven module comprises a task engine module (Rao – see par 46 – The intelligent automation applications 135 may incorporate the machine learning and AI components created by the automation platform 130 to handle automation of complicated enterprise processes. To handle the automation of enterprise processes, modules are configured to automate domain-specific activities and tasks. The modules are built and trained to intelligently automate activities and tasks in the domains of finance, security, HR and GRC. The finance module 136, security module 137 and HR module 138 may all be trained separately and on data sets that are specific to the domain being automated;
See also Ma – See par 70 – processor has module(s) implemented in hardware and/or software to perform steps in FIG. 3, 300; par 40 - a more complex but ultimately rote procedure such as invoice processing, asset acquisition, image processing, submitting a work or data processing request, entering data into a repository, converting data from one format to another, etc. may include a suitable set of tasks for automation, depending on the frequency and weight of performance, among other variables that will be appreciated by those having ordinary skill in the art upon reading the present disclosure. The set of tasks may include all or only a portion of all tasks required to complete the overall procedure.).
the data-driven module provides a corresponding event number to the knowledge map module according to the first business data (Rao – see par 31 - Discovery module 111 may use process mining techniques applied to application and user footprints, digital exhaust and event logs to discover and create a digital representation of the business process. Visualization module 112 may use the data generated by the discovery module 111 to create a visualization of the current business process as it happens in an enterprise. This may then be leveraged to check for conformance, optimization and driving transformation for a more efficient process; see par 32 - For example, in the domain of e-commerce, events from the time a user logs into a system, orders a product and receives a product are logged. The discovery module 111 may view the process and trace the processes from the front end, to the fulfillment center.
see also Ma – see par 81-82 - Where events comprise multiple actions, preferably the actions are identifiable as forming a single event, e.g. by an event identifier field included in the table. entries tied together by an event ID; See par 83 – actions represent “subtraces,” i.e. a sequence of events recorded in operation 302 (FIG. 3); The subtraces may exist at the application level, but can also be implemented at the element level, e.g. Dialog X or Tab Y within the application/UI; See par 98 - This single k