DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on August 25, 2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-14 and 16-24 are rejected under 35 U.S.C. 103 as being unpatentable over Gardner (US Publication No. 2020/0351091) and Pearl et al. (US Publication No. 2017/0048285) in view of Pinczel et al. (US Publication No. 2023/0396673) and further in view of Bent et al. (US Publication No. 2017/0078169).
As to claim 1, Gardner teaches a method comprising:
receiving, at a server [distributed computing system 254/workflow scheduler], a workflow definition (see e.g., [0069] for the workflow scheduler 262 accepting workflows from at least some of the plurality of user devices, [0070] for the user device 252 submitting a workflow to the workflow scheduler 262, [0109] for at step 504, the user device 252 submitting a workflow request to the workflow scheduler 262, the workflow request including information identifying the user, and the information identifying the user being assumed to be a user ID in FIG. 8. The workflow scheduler receives, from a user device, a workflow definition as part of a workflow request.);
generating, at the server, a unique key [flow ID] for the received workflow definition (see e.g., [0109] for at step 506, the workflow scheduler 262 generating a flow ID, which is a unique ID that identifies the workflow. The workflow scheduler generates a flow ID for the received workflow definition.);
converting, at the server, the received workflow definition to an internal workflow schema [series of task IDs] and setting states [validity] of workflow steps [tasks] of the internal workflow schema as states [valid] using the generated key (see e.g., [0109] for at step 508, the workflow scheduler 262 transmitting a request for a workflow token to the server 270, the workflow token being specific to the workflow and including both: (i) the flow ID so that the workflow token may be tied to the specific workflow (via the flow ID), and (ii) the user ID so that the workflow token may be tied to the specific user associated with the workflow (via the user ID), at step 510, the server 270 using its private key 276 to digitally sign information that includes at least the received user ID and the flow ID, and then the server 270 generating the workflow token, and at step 512, the workflow token being transmitted from the server 270 to the workflow scheduler 262, [0110] for the workflow beginning, and as part of the workflow a particular task needing to be executed by the distributed computing system 254, therefore, at step 514 the workflow scheduler 262 transmitting a request for an action token to the server 270, the request for the action token including the workflow token, at step 516, the received workflow token being verified by the server 270 by verifying the digital signature in the received workflow token, assuming verification is successful, then at step 518 the server 270 generating an action token, the action token incorporating the information from the received workflow token, e.g. the user ID and flow ID, and the action token being digitally signed, [0111] for at step 520, the action token being transmitted from the server 270 to the workflow scheduler 262, at step 522, the action token and task being transmitted from the workflow scheduler 262 to the resource manager 258, the method of FIG. 7 then being performed, except that the action token is used in place of the user ID in FIG. 7, [0085] for at step 404, the resource manager 258 transmitting the user ID and task to the computing node 256, at step 406, the computing node 256 generating a task ID that identifies the task, at step 408, the computing node 256 transmitting the task ID to the resource manager 258, at step 410, the resource manager 258 storing an indication in its memory that the task ID is associated with a valid task, and the task being considered valid because it was scheduled by the resource manager 258 and the task has not completed execution, and [0099] for at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. The resource manager converts the received workflow definition to a series of task IDs and the tasks are set as valid using the flow ID. The workflow scheduler transmits the action token, which includes the flow ID, to the resource manager in order to set the tasks as valid.);
storing, at a distributed log storage [resource manager memory] communicatively coupled to the server, the internal workflow schema having the states to a state topic [validity indication] of the distributed log storage using the generated unique key, wherein the state topic includes the states [valid/invalid indications] of the internal workflow schema (see e.g., [0080] for the resource manager 258 including an associated memory 324 (e.g. to store applications, tasks, data, etc.) and the resource manager 258 further includes a first network interface 326 for communicating with the workflow scheduler 262 over a network and a second network interface 328 for communicating with the plurality of computing nodes, including computing node 256, over a network, [0083] for the server 270 being able to communicate with: the resource manager 258, [0111] for at step 520, the action token being transmitted from the server 270 to the workflow scheduler 262, at step 522, the action token and task being transmitted from the workflow scheduler 262 to the resource manager 258, the method of FIG. 7 then being performed, except that the action token is used in place of the user ID in FIG. 7, [0085] for at step 404, the resource manager 258 transmitting the user ID and task to the computing node 256, at step 406, the computing node 256 generating a task ID that identifies the task, at step 408, the computing node 256 transmitting the task ID to the resource manager 258, at step 410, the resource manager 258 storing an indication in its memory that the task ID is associated with a valid task, and the task being considered valid because it was scheduled by the resource manager 258 and the task has not completed execution, and [0099] for at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. The resource manager memory is communicatively coupled to the workflow scheduler, server, and nodes. The series of task IDs having valid states are stored, at the resource manager memory, to an indication of validity using the action token, which includes the flow ID. The indication of validity includes valid and invalid indications of the series of task IDs.);
receiving, at the server, a message [response] that includes a state based on one or more steps of the internal workflow schema (see e.g., [0088] for at step 426, the resource manager 258 receiving the task ID and checking whether the task ID is valid and at step 428, the resource manager 258 transmitting a response to the server 270 indicating that the task is valid, i.e. indicating that the task identified by the task ID is still being executed by the computing node 256. The server receives a response that includes a valid indication based on a task ID of the series of task IDs.);
performing, at the server with one or more workers [data storage system], at least one operation [data access] based on the received message (see e.g., [0096] for at step 432, a data access token being issued and transmitted to the computing node 256, the data access token being generated by the server 270, in which case the data access token may incorporate the user ID and/or task ID, the data access token further incorporating a digital signature that is generated by the server 270 using the private key 276, the digital signature being generated by the server 270 using the user ID and/or task ID, and the digital signature being verified by the data storage system 260 before allowing the data access to occur, [0097] for a data access token being only issued and sent to the computing node 256 at step 432 if at step 428 the resource manager 258 indicated that the task associated with the task ID was valid, i.e. that the task identified by the task ID was still being executed by the computing node 256, and [0098] for at step 434, the computing node 256 using the data access token received from the server 270 in order to access the data in the data storage system 260 and for example, the computing node 256 transmitting, to the data storage system 260, a request to access data, the request including the data access token, and the data storage system 260 confirming the validity of the data access token and facilitating the data access, and execution of the task by the computing node 256 continuing, and including multiple data access requests (e.g. by repeating steps 418 to 434), but eventually execution of the task finishing at step 436. The computing node transmits, to the data storage system, a request to access data based on the response including a valid indication.);
updating, at the distributed log storage, the state based on the performed at least one operation (see e.g., [0098] for upon task completion, at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing, for example, the indication being in the form of a message indicating that the computing resources associated with the task are again free and ready for another task to be scheduled, the indication including the task ID, and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. The resource manager memory updates the indication of validity based on completion of the performed data access.); and
internal workflow schema for the generated key (see e.g., [0109] for at step 508, the workflow scheduler 262 transmitting a request for a workflow token to the server 270, the workflow token being specific to the workflow and including both: (i) the flow ID so that the workflow token may be tied to the specific workflow (via the flow ID), and (ii) the user ID so that the workflow token may be tied to the specific user associated with the workflow (via the user ID), at step 510, the server 270 using its private key 276 to digitally sign information that includes at least the received user ID and the flow ID, and then the server 270 generating the workflow token, and at step 512, the workflow token being transmitted from the server 270 to the workflow scheduler 262, [0110] for the workflow beginning, and as part of the workflow a particular task needing to be executed by the distributed computing system 254, therefore, at step 514 the workflow scheduler 262 transmitting a request for an action token to the server 270, the request for the action token including the workflow token, at step 516, the received workflow token being verified by the server 270 by verifying the digital signature in the received workflow token, assuming verification is successful, then at step 518 the server 270 generating an action token, the action token incorporating the information from the received workflow token, e.g. the user ID and flow ID, and the action token being digitally signed, [0111] for at step 520, the action token being transmitted from the server 270 to the workflow scheduler 262, at step 522, the action token and task being transmitted from the workflow scheduler 262 to the resource manager 258, the method of FIG. 7 then being performed, except that the action token is used in place of the user ID in FIG. 7, [0085] for at step 404, the resource manager 258 transmitting the user ID and task to the computing node 256, at step 406, the computing node 256 generating a task ID that identifies the task, at step 408, the computing node 256 transmitting the task ID to the resource manager 258, at step 410, the resource manager 258 storing an indication in its memory that the task ID is associated with a valid task, and the task being considered valid because it was scheduled by the resource manager 258 and the task has not completed execution, and [0099] for at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. The resource manager converts the received workflow definition to a series of task IDs and the tasks are set as valid using the flow ID. The workflow scheduler transmits the action token, which includes the flow ID, to the resource manager in order to set the tasks as valid.).
Gardner does not specifically disclose the states being not-started states. However, Pearl teaches
the states being not-started [pending] states (see e.g., [0178] for a work item being associated with a then current pending state (e.g., state=pend)).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner to include the states being not-started states, as taught by Pearl, for the benefit of distinguishing between not-started and in-progress states for steps that have not completed (see e.g., Pearl, [0178]).
Gardner in view of Pearl does not specifically disclose compacting, at the distributed log storage, the state topic of the internal workflow schema based on the updated state, wherein the compacting reduces the states of the internal workflow schema to the current states, without intermediary states. However, Pinczel teaches
compacting, at the distributed log storage [distributed log], the state topic [field] of the internal workflow schema [log entries] based on the updated state [value], wherein the compacting reduces the states of the internal workflow schema to the current states, without intermediary states (see e.g., [0004] for using a replicated log (also known as a ‘distributed log’), [0005] for FIG. 1 showing an exemplary log 12, having three log entries 14, 16, 18 for a key-value database, each log entry 14, 16, 18 being labelled with a respective log index I, the state of the database after each log entry 14 having been applied to the database being shown above the log entries 14, thus, the first log entry 14 (with log index ‘1’) indicating that a value “x” was added to field A in the database (note that a field in a key-value database is also known as a “key”), after the first log entry 14, the database having a state 20 in which there is a value “x” in field A, the second log entry 16 (with log index ‘2’) indicating that a value “y” was added to data field B in the database, after the second log entry 16, the database having a state 22 in which there is a value “x” in field A and a value “y” in field B, the third log entry 18 (with log index ‘3’) adding a value “z” to data field A in the database, as data field A already has a value “x”, the value “z” overwriting the value “x” in data field A, and after the third log entry 18, the database having a state 24 in which there is a value “z” in field A and value “y” in field B, and [0015] for FIG. 2 showing the effect of a log cleaning approach on the example of FIG. 1, here, as the effect of the first log entry 14 has been overridden by a later log entry (i.e. the log entry with log index ‘3’ replaces the value of data field A that was set by the first log entry 14), it being fine if a newly-joined server only receives the second and third log entries 16, 18, as the server would still arrive at the correct state (A=“z” and B=“y”), thus, the first log entry 14 being removed from the log. At the distributed log, each field of the log entries is compacted based on the updated value. The compacting reduces the values of the log entries to the current values, without intermediate values.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner in view of Pearl to compact, at the distributed log storage, the state topic of the internal workflow schema based on the updated state, wherein the compacting reduces the states of the internal workflow schema to the current states, without intermediary states, as taught by Pinczel, for the benefit of maintaining consistency (or identifying inconsistencies) in replicated log-based consensus protocols that use log compaction (see e.g., Pinczel, [0023]).
Gardner in view of Pearl and Pinczel does not specifically disclose storing the at least one operation performed by the one or more workers and at least one state of the internal workflow schema based on the at least one operation performed as a log topic separately from the distributed log storage, wherein the log topic is a partition of related data to which the one or more workers store the at least one state of the internal workflow schema being performed; and wherein the stored operations of the log topic are separately stored from the distributed log storage without compaction. However, Bent teaches
storing the at least one operation [event] performed by the one or more workers [app] and at least one state [time] of the internal workflow schema [sequence] based on the at least one operation performed as a log topic [complete log] separately from the distributed log storage [event log storage 124], wherein the log topic is a partition of related data [re-constructed, complete, discrete events on storage device 210] to which the one or more workers store the at least one state of the internal workflow schema being performed (see e.g., [0025] for the actions or interactions of a user running the app on the user device 128 being recorded as events, [0030] for a set of event logs E collected for viewed content set C, [0031] for compaction being primarily performed on already generated event logs in order to allow for further event generation, [0032] for meta data information for a content item in this embodiment including but not being limited to, the length of the item, e.g., in terms of time, [0034] for the set of event logs E collected for viewed content set C being retrieved from one or more storage devices 124, 126 that stores the event logs and content with metadata, [0037] for with event log E, an Event Summarizer 118 in one embodiment applying one of the following functions to progressively compact the event data and this process iterating until the requisite compaction is achieved, [0038] for Drop (e): this function dropping an event e, an event being dropped only when domain rules allow it to be re-created later on the server side, and for example, a fast forward (ffwd) or rewind (rwd) event always returning to previous state; thus, play-ffwd-play-pause can be compacted to play-ffwd-pause, and a domain rule can later insert a play prior to the pause, [0042] for the compacted segments being stored in storage 124, e.g., a storage device that stores event logs, [0047] for generating as output a re-constructed, complete, discrete event logs for content X and user A, where the re-constructed sequence represents the most likely time-stamped sequence of events, as generated when user A interacted with the content X and the reconstructed complete, discrete event logs linked with context X for user A being stored in a storage device 210, and [0049] for the component 208 inserting additional events (that might have been dropped at the client side leveraging domain rules) at appropriate positions in the sequence to obtain the full re-constructed sequence, in one embodiment, this insertion of events being governed through the same state transitions modeled for the dropping of events as described above, for example, if a play event has been dropped after a rewind event after a learner has viewed a learning content this being reinserted on the server side as the server reestablishes the consistency of event sequences, and in this case, in one embodiment, the consistency of event sequences being maintained through a modeled state transition (state machine), which determines that a rewind event must be followed by a play event if the content item has not been stopped after the rewind event. An event performed by the app and a time, based on the performed event, of the sequence is stored as a complete log. The complete log storage is separate from the event log storage. The complete log is storage device 210 storing re-constructed, complete, discrete events to which the app stores the times of the sequence being performed.); and
wherein the stored operations of the log topic are separately stored from the distributed log storage without compaction (see e.g., [0042] for the compacted segments being stored in storage 124, e.g., a storage device that stores event logs, [0047] for generating as output a re-constructed, complete, discrete event logs for content X and user A, where the re-constructed sequence represents the most likely time-stamped sequence of events, as generated when user A interacted with the content X and the reconstructed complete, discrete event logs linked with context X for user A being stored in a storage device 210, and [0049] for the component 208 inserting additional events (that might have been dropped at the client side leveraging domain rules) at appropriate positions in the sequence to obtain the full re-constructed sequence, in one embodiment, this insertion of events being governed through the same state transitions modeled for the dropping of events as described above, for example, if a play event has been dropped after a rewind event after a learner has viewed a learning content this being reinserted on the server side as the server reestablishes the consistency of event sequences, and in this case, in one embodiment, the consistency of event sequences being maintained through a modeled state transition (state machine), which determines that a rewind event must be followed by a play event if the content item has not been stopped after the rewind event. The stored events of the complete log are stored in storage device 210, which is separate from event log storage 124. The stored events of the complete log are uncompacted.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner in view of Pearl and Pinczel to store the at least one operation performed by the one or more workers and at least one state of the internal workflow schema based on the at least one operation performed as a log topic separately from the distributed log storage, wherein the log topic is a partition of related data to which the one or more workers store the at least one state of the internal workflow schema being performed; and wherein the stored operations of the log topic are separately stored from the distributed log storage without compaction, as taught by Bent, for the benefit of maintaining valuable data and insights while complying with resource constraints (see e.g., Bent, [0019]).
As to claim 2, the limitations of parent claim 1 have been discussed above. Gardner teaches
validating, at the server, the received workflow definition (see e.g., [0109] for at step 506, the workflow scheduler 262 generating a flow ID, which is a unique ID that identifies the workflow, at step 508, the workflow scheduler 262 transmitting a request for a workflow token to the server 270, the workflow token being specific to the workflow and including both: (i) the flow ID so that the workflow token may be tied to the specific workflow (via the flow ID), and (ii) the user ID so that the workflow token may be tied to the specific user associated with the workflow (via the user ID), at step 510, the server 270 using its private key 276 to digitally sign information that includes at least the received user ID and the flow ID, and then the server 270 generating the workflow token, and at step 512, the workflow token being transmitted from the server 270 to the workflow scheduler 262 and [0110] for the workflow beginning, and as part of the workflow a particular task needing to be executed by the distributed computing system 254, therefore, at step 514 the workflow scheduler 262 transmitting a request for an action token to the server 270, the request for the action token including the workflow token, at step 516, the received workflow token being verified by the server 270 by verifying the digital signature in the received workflow token, assuming verification is successful, then at step 518 the server 270 generating an action token, the action token incorporating the information from the received workflow token, e.g. the user ID and flow ID, and the action token being digitally signed. The server validates the received workflow definition through verification of its digital signature.).
As to claim 3, the limitations of parent claim 1 have been discussed above. Gardner teaches
determining, at the server, a run number [data access hash value] for the internal workflow schema from the state topic to determine if the internal workflow schema for the key was completed (see e.g., [0086] for the digital signature being generated by hashing a data block to generate a hashed value, and then encrypting the hashed value with the server 270's private key 276 to generate the digital signature, [0088] for at step 426, the resource manager 258 receiving the task ID and checking whether the task ID is valid and at step 428, the resource manager 258 transmitting a response to the server 270 indicating that the task is valid, i.e. indicating that the task identified by the task ID is still being executed by the computing node 256, [0096] for at step 432, a data access token being issued and transmitted to the computing node 256, the data access token being generated by the server 270, in which case the data access token may incorporate the user ID and/or task ID, the data access token further incorporating a digital signature that is generated by the server 270 using the private key 276, the digital signature being generated by the server 270 using the user ID and/or task ID, and the digital signature being verified by the data storage system 260 before allowing the data access to occur, [0097] for a data access token being only issued and sent to the computing node 256 at step 432 if at step 428 the resource manager 258 indicated that the task associated with the task ID was valid, i.e. that the task identified by the task ID was still being executed by the computing node 256, and [0098] for at step 434, the computing node 256 using the data access token received from the server 270 in order to access the data in the data storage system 260 and for example, the computing node 256 transmitting, to the data storage system 260, a request to access data, the request including the data access token, and the data storage system 260 confirming the validity of the data access token and facilitating the data access, and execution of the task by the computing node 256 continuing, and including multiple data access requests (e.g. by repeating steps 418 to 434), but eventually execution of the task finishing at step 436, upon task completion, at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing, for example, the indication being in the form of a message indicating that the computing resources associated with the task are again free and ready for another task to be scheduled, the indication including the task ID, and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. Based on the task ID being valid, the server determines a hash value in order to generate a data access token. Based on the computing node using the data access token to access data in the storage system, the resource manager determines if the task IDs for the flow ID were completed.).
As to claim 4, the limitations of parent claims 1 and 3 have been discussed above. Gardner teaches
receiving, at the server, a request to perform the internal workflow schema based on the key and the determined run number (see e.g., [0086] for the digital signature being generated by hashing a data block to generate a hashed value, and then encrypting the hashed value with the server 270's private key 276 to generate the digital signature, [0087] for the digital signature verification method performed being implementation specific and depending upon the digital signature algorithm implemented and in one example implementation, step 420 including the server 270: for (i) decrypting the digital signature received in the token to obtain a first value; (ii) computing a hash of the data block received in the token, in order to obtain a second value, and then (iii) confirming that the first value matches the second value, and [0096] for at step 432, a data access token being issued and transmitted to the computing node 256, the data access token being generated by the server 270, in which case the data access token may incorporate the user ID and/or task ID, the data access token further incorporating a digital signature that is generated by the server 270 using the private key 276, the digital signature being generated by the server 270 using the user ID and/or task ID, and the digital signature being verified by the data storage system 260 before allowing the data access to occur. The node receives a request to perform task IDs, which include the flow ID, based on the data access hash value.).
As to claim 5, the limitations of parent claims 1 and 3 have been discussed above. Gardner teaches
internal workflow schema for the key (see e.g., [0109] for at step 508, the workflow scheduler 262 transmitting a request for a workflow token to the server 270, the workflow token being specific to the workflow and including both: (i) the flow ID so that the workflow token may be tied to the specific workflow (via the flow ID), and (ii) the user ID so that the workflow token may be tied to the specific user associated with the workflow (via the user ID), at step 510, the server 270 using its private key 276 to digitally sign information that includes at least the received user ID and the flow ID, and then the server 270 generating the workflow token, and at step 512, the workflow token being transmitted from the server 270 to the workflow scheduler 262, [0110] for the workflow beginning, and as part of the workflow a particular task needing to be executed by the distributed computing system 254, therefore, at step 514 the workflow scheduler 262 transmitting a request for an action token to the server 270, the request for the action token including the workflow token, at step 516, the received workflow token being verified by the server 270 by verifying the digital signature in the received workflow token, assuming verification is successful, then at step 518 the server 270 generating an action token, the action token incorporating the information from the received workflow token, e.g. the user ID and flow ID, and the action token being digitally signed, [0111] for at step 520, the action token being transmitted from the server 270 to the workflow scheduler 262, at step 522, the action token and task being transmitted from the workflow scheduler 262 to the resource manager 258, the method of FIG. 7 then being performed, except that the action token is used in place of the user ID in FIG. 
7, [0085] for at step 404, the resource manager 258 transmitting the user ID and task to the computing node 256, at step 406, the computing node 256 generating a task ID that identifies the task, at step 408, the computing node 256 transmitting the task ID to the resource manager 258, at step 410, the resource manager 258 storing an indication in its memory that the task ID is associated with a valid task, and the task being considered valid because it was scheduled by the resource manager 258 and the task has not completed execution, and [0099] for at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. The resource manager converts the received workflow definition to a series of task IDs and the tasks are set as valid using the flow ID. The workflow scheduler transmits the action token, which includes the flow ID, to the resource manager in order to set the tasks as valid.).
Gardner in view of Pearl does not specifically disclose wherein the compacting reduces the states of the internal workflow schema to the current states of a run, without intermediary states of the internal workflow schema in the distributed log storage. However, Pinczel teaches
wherein the compacting reduces the states of the internal workflow schema to the current states of a run [database update], without intermediary states of the internal workflow schema in the distributed log storage (see e.g., [0004] for using a replicated log (also known as a ‘distributed log’), a log being a series of log entries, each describing a specific change to the state, and for example if the state is a complete database, and a log entry describing a single update to one data item in the database, [0005] for FIG. 1 showing an exemplary log 12, having three log entries 14, 16, 18 for a key-value database, each log entry 14, 16, 18 being labelled with a respective log index I, the state of the database after each log entry 14 having been applied to the database being shown above the log entries 14, thus, the first log entry 14 (with log index ‘1’) indicating that a value “x” was added to field A in the database (note that a field in a key-value database is also known as a “key”), after the first log entry 14, the database having a state 20 in which there is a value “x” in field A, the second log entry 16 (with log index ‘2’) indicating that a value “y” was added to data field B in the database, after the second log entry 16, the database having a state 22 in which there is a value “x” in field A and a value “y” in field B, the third log entry 18 (with log index ‘3’) adding a value “z” to data field A in the database, as data field A already has a value “x”, the value “z” overwriting the value “x” in data field A, and after the third log entry 18, the database having a state 24 in which there is a value “z” in field A and value “y” in field B, and [0015] for FIG. 2 showing the effect of a log cleaning approach on the example of FIG. 1, here, as the effect of the first log entry 14 has been overridden by a later log entry (i.e. 
the log entry with log index ‘3’ replaces the value of data field A that was set by the first log entry 14), it being fine if a newly-joined server only receives the second and third log entries 16, 18, as the server would still arrive at the correct state (A=“z” and B=“y”), thus, the first log entry 14 being removed from the log. The compacting reduces the values of the log entries to the current values of the database update, without intermediary values of the log entries in the distributed log.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner in view of Pearl wherein the compacting reduces the states of the internal workflow schema to the current states of a run, without intermediary states of the internal workflow schema in the distributed log storage, as taught by Pinczel, for the benefit of maintaining consistency (or identifying inconsistencies) in replicated log-based consensus protocols that use log compaction (see e.g., Pinczel, [0023]).
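The log-cleaning behavior Pinczel describes in [0004], [0005], and [0015] (FIGS. 1-2) can be sketched as follows; the function names are hypothetical, but the three-entry log mirrors the A="x", B="y", A="z" example of FIG. 1:

```python
def replay(log):
    """Apply log entries in order to reconstruct the key-value state."""
    state = {}
    for index, field, value in log:
        state[field] = value  # a later entry overwrites an earlier one
    return state

def compact(log):
    """Keep only the last entry per field; intermediary entries whose
    effect was overridden are dropped, as in Pinczel's FIG. 2."""
    latest = {}
    for entry in log:
        latest[entry[1]] = entry  # remember the newest entry for each field
    # Preserve log order among the surviving entries.
    return [entry for entry in log if latest[entry[1]] is entry]

# The three-entry log of Pinczel's FIG. 1: (log index, field, value).
log = [(1, "A", "x"), (2, "B", "y"), (3, "A", "z")]
print(replay(log))                       # {'A': 'z', 'B': 'y'}
compacted = compact(log)
print(compacted)                         # [(2, 'B', 'y'), (3, 'A', 'z')]
print(replay(compacted) == replay(log))  # True: same final state
```

A newly-joined server replaying only the compacted log still arrives at the correct current state, which is the point of removing the first log entry in [0015].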
As to claim 6, the limitations of parent claim 1 have been discussed above. Gardner teaches
wherein the internal workflow schema includes at least one workflow step selected from the group consisting of: a hold sequential workflow step, a parallel workflow step, and a nested workflow step (see e.g., [0073] for when a computing node 256 has available resources, a task being assigned by the resource manager 258 and transmitted to the computing node 256 for execution and in some embodiments, related tasks being sent for execution to other computing nodes, e.g., for parallel execution. The series of task IDs include parallel tasks.).
As to claim 7, the limitations of parent claim 1 have been discussed above. Gardner teaches wherein the receiving the message further comprises:
determining that the state of the received message is at least one selected from the group consisting of: not-started, in-progress [valid], error, and completed (see e.g., [0088] for at step 426, the resource manager 258 receiving the task ID and checking whether the task ID is valid and at step 428, the resource manager 258 transmitting a response to the server 270 indicating that the task is valid, i.e. indicating that the task identified by the task ID is still being executed by the computing node 256. The server receives a response that includes a valid indication based on a task ID of the series of task IDs. The valid indication corresponds to in-progress.).
As to claim 8, the limitations of parent claims 1 and 7 have been discussed above. Gardner does not specifically teach wherein when the state of the received message is not-started, a worker of the server performs the at least one operation based on the received message. However, Pearl teaches
wherein when the state of the received message [workflow event] is not-started, a worker [workflow engine] of the server performs the at least one operation [sign/publish] based on the received message (see e.g., [0179] for the admin role provisioned to user u3 in workflow view 16194 indicating that user u3 can take action to complete the active work item w7, specifically, as an example, user u3 inviting user u5 from enterprise D to participate as an approver for the workflow (at workflow event 16522), responsive to the workflow event 16522, a workflow view 16195 being presented with associated instances of metadata (e.g., workflow participant metadata 16322 and work item metadata 16342), as shown, the workflow participant metadata 16322 comprising a metadata update 16361 identifying user u5 as the approver at work item w8, the work item metadata 16342 also comprising a metadata update 16362 indicating a state change for work item w7 from active to done, and a state change for work item w8 from pend to active, and the work item w8 remaining in the active state until the task (e.g., sign) associated with work item w8 is executed, [0180] for FIG. 16C2 depicting a state transition of the workflow view 16195, the workflow participant metadata 16322, and the work item metadata 16342 earlier described in FIG. 
16C1, as shown, the state transition being responsive to the user u5 from enterprise D signing the document f3 (at workflow event 16523), responsive to the workflow event 16523, a workflow view 16196 being presented with associated instances of metadata (e.g., workflow participant metadata 16323 and work item metadata 16343), as shown, the workflow participant metadata 16323 comprising a metadata update 16363 updating the role of user u4 to approver and updating the role of user u5 to reviewer, the work item metadata 16343 also comprising a metadata update 16364 indicating a state change for work item w8 from active to done and a state change for work item w9 from pend to active, and the work item w9 remaining in the active state until the task (e.g., publish) associated with work item w9 is executed, [0187] for the workflow engine 1662 generating the workflow responses 1654 based at least in part on detected instances of the workflow events 1652, a given workflow response precipitating one or more instances of content updates 1638 to the shared content 1622, and for example, a signature provided by an approver being applied in a designated location of a legal agreement without having to download, print, sign, scan, and upload the agreement, and [0188] for an event mapping 1647 in the workflow rules 1626 mapping a received workflow event to one or more operations (e.g., update metadata, update content, send workflow alert, etc.) executed at the workflow engine 1662, a logic mapping 1646 in the workflow rules 1626 and/or a mapping operation of functions within the rule API 1665 mapping a trigger description in the work item attributes 1645 to a set of logic to be executed at the workflow engine 1662, and for example, a trigger=parent.done work item attribute mapping to a set of conditional logic (e.g., if parent.state==“done” THEN set.metadata.child.state=“active”) comprising inputs extracted from other metadata (e.g., parent attributes, state attributes, etc.). 
When the state of the sign workflow event is pending and the state of the select task is done, the pending state may be updated to an active state and workflow engine signs based on the sign workflow event. When the state of the publish workflow event is pending and the state of the sign task is done, the pending state may be updated to an active state and workflow engine publishes based on the publish workflow event.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner wherein when the state of the received message is not-started, a worker of the server performs the at least one operation based on the received message, as taught by Pearl, for the benefit of distinguishing between not-started and in-progress states for steps that have not completed (see e.g., Pearl, [0178]).
As to claim 9, the limitations of parent claims 1 and 7 have been discussed above. Gardner does not specifically teach wherein when the state of the received message is in-progress, a worker of the server skips the message. However, Pearl teaches
wherein when the state of the received message is in-progress [active], a worker of the server skips the message (see e.g., [0187] for the workflow engine 1662 generating the workflow responses 1654 based at least in part on detected instances of the workflow events 1652 and a workflow event corresponding to a “Forward” button click generating a workflow response comprising an update to metadata describing a work item state (e.g., from state=active to state=done) and a workflow alert sent to the user device of the owner of the next work item in the workflow. When the state of the Forward workflow event is active, the active state may be updated to a done state and the workflow engine skips execution of the Forward workflow event).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner wherein when the state of the received message is in-progress, a worker of the server skips the message, as taught by Pearl, for the benefit of distinguishing between not-started and in-progress states for steps that have not completed (see e.g., Pearl, [0178]).
As to claim 10, the limitations of parent claims 1 and 7 have been discussed above. Gardner does not specifically disclose wherein when the state of the received message is an error state, a worker of the server retries performing the operation from which the error occurred. However, Pearl teaches
wherein when the state [status] of the received message [job request] is an error state [failed], a worker [retry engine] of the server [job manager] retries performing the operation [action] from which the error occurred (see e.g., [0087] for sending job requests to the jobs manager, [0094] for a metadata attribute of a contract template including a status attribute that causes a particular action or job to be performed and in this manner, metadata or changes to metadata triggering job requests (e.g., events or actions), [0104] for the job manager including a retry engine, [0107] for one service in each cluster retrying jobs, and [0114] for the status engine 825 ensuring that jobs are executed and in one embodiment, jobs and status updates (started, completed, failed) being persisted in a local database (e.g., the local HBase cluster). When the status of the job request is failed, a retry engine of the job manager may retry performing the action from which the failure occurred.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner wherein when the state of the received message is an error state, a worker of the server retries performing the operation from which the error occurred, as taught by Pearl, for the benefit of ensuring job execution (see e.g., Pearl, [0103]).
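The state-based dispatch mapped above for claims 7 through 13 (not-started performed, in-progress and completed skipped, error retried with the state first set to in-progress per claim 11) can be sketched as follows; this is an illustrative composite of the claimed behavior, not code from any cited reference:

```python
def handle_message(message, perform):
    """Hypothetical worker dispatch over the claimed message states:
    not-started -> perform the operation (claim 8);
    in-progress / completed -> skip the message (claims 9 and 13);
    error -> record in-progress, then retry the operation (claims 10-11)."""
    state = message["state"]
    if state == "not-started":
        perform(message)
        message["state"] = "completed"   # claim 12: completed state written back
    elif state == "error":
        message["state"] = "in-progress" # claim 11: mark started before the retry
        perform(message)
        message["state"] = "completed"
    # in-progress and completed messages fall through untouched (skipped)
    return message["state"]

log = []  # stand-in for the operations actually performed
print(handle_message({"state": "not-started", "op": "sign"}, log.append))   # completed
print(handle_message({"state": "in-progress", "op": "sign"}, log.append))  # in-progress
print(handle_message({"state": "error", "op": "publish"}, log.append))     # completed
print(len(log))  # 2 operations actually performed
```

Only the not-started and error branches invoke `perform`, which is the distinction the claim 9 and claim 13 skip limitations turn on.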
As to claim 11, the limitations of parent claims 1, 7, and 10 have been discussed above. Gardner does not specifically disclose changing, at the distributed log storage, the state to an in-progress state when the worker retries performing the operation. However, Pearl teaches
changing, at the distributed log storage [local HBase cluster], the state to an in-progress state [started] when the worker retries performing the operation (see e.g., [0114] for the status engine 825 ensuring that jobs are executed and in one embodiment, jobs and status updates (started, completed, failed) being persisted in a local database (e.g., the local HBase cluster) and [0116] for each database comprising an HBase at geographically remote data centers. The status engine changes, at the local HBase cluster, the state to started when the retry engine retries performing the action.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner to change, at the distributed log storage, the state to an in-progress state when the worker retries performing the operation, as taught by Pearl, for the benefit of ensuring job execution (see e.g., Pearl, [0103]).
As to claim 12, the limitations of parent claim 1 have been discussed above. Gardner teaches
wherein when the at least one operation is completed by a worker, a completed state [invalid indication] is written to the distributed log storage (see e.g., [0098] for upon task completion, at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing, for example, the indication being in the form of a message indicating that the computing resources associated with the task are again free and ready for another task to be scheduled, the indication including the task ID, and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. When the data access is completed by the data storage system, an invalid indication is written to the resource manager memory.).
As to claim 13, the limitations of parent claim 1 have been discussed above. Gardner teaches
wherein when the state of the received message is completed, a worker of the server skips the message (see e.g., [0097] for a data access token being only issued and sent to the computing node 256 at step 432 if at step 428 the resource manager 258 indicated that the task associated with the task ID was valid, i.e. that the task identified by the task ID was still being executed by the computing node 256. When the response indicates an invalid state, the data storage system skips the data access task included in the response.).
As to claim 14, the limitations of parent claim 1 have been discussed above. Gardner does not specifically disclose transmitting, at the server, the state topic of the internal workflow schema for display. However, Pearl teaches
transmitting, at the server [collaboration platform], the state topic [status indication] of the internal workflow schema [work flow tasks] for display (see e.g., [0061] for when deployed in an organizational setting, multiple workspaces (e.g., workspace A-N) being created to support different projects or a variety of work flows and [0066] for in a user interface of the web-based collaboration platform where notifications are presented, users, via the user interface, creating action items (e.g., tasks) and delegating the action items to other users including collaborators pertaining to a work item 215, for example, the collaborators 206 being in the same workspace A 205 or the user including a newly invited collaborator, similarly, in the same user interface where discussion topics can be created in a workspace (e.g., workspace A, B or N, etc.), actionable events on work items being created and/or delegated/assigned to other users such as collaborators of a given workspace or other users, through the same user interface, task status and updates from multiple users or collaborators being indicated and reflected, and in some instances, the users performing the tasks (e.g., review or approve or reject, etc.) via the same user interface. The collaboration platform transmits the status indication of the work flow tasks for display.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner to transmit, at the server, the state topic of the internal workflow schema for display, as taught by Pearl, for the benefit of allowing users to collaborate on a task (see e.g., Pearl, [0066]).
As to claim 16, Gardner teaches a system comprising:
a distributed log storage [resource manager memory] (see e.g., [0080] for the resource manager 258 including an associated memory 324); and
a server [distributed computing system 254/workflow scheduler] having a processor and a memory that are communicatively coupled to the distributed log storage (see e.g., [0079] for the workflow scheduler 262 including a processor 312 for performing the operations of the workflow scheduler 262 and an associated memory 314 (e.g. to store applications, tasks, data, etc.), [0080] for the resource manager 258 including a processor 322 for performing the operations of the resource manager 258 and an associated memory 324 (e.g. to store applications, tasks, data, etc.) and the resource manager 258 further includes a first network interface 326 for communicating with the workflow scheduler 262 over a network and a second network interface 328 for communicating with the plurality of computing nodes, including computing node 256, over a network, and [0081] for the node manager 334 including a processor 342 for performing the operations of the node manager 334 and an associated memory 344), the server to:
receive a workflow definition (see e.g., [0069] for the workflow scheduler 262 accepting workflows from at least some of the plurality of user devices, [0070] for the user device 252 submitting a workflow to the workflow scheduler 262, [0109] for at step 504, the user device 252 submitting a workflow request to the workflow scheduler 262, the workflow request including information identifying the user, and the information identifying the user being assumed to be a user ID in FIG. 8. The workflow scheduler receives, from a user device, a workflow definition as part of a workflow request.);
generate a unique key [flow ID] for the received workflow definition (see e.g., [0109] for at step 506, the workflow scheduler 262 generating a flow ID, which is a unique ID that identifies the workflow. The workflow scheduler generates a flow ID for the received workflow definition.);
convert the received workflow definition to an internal workflow schema [series of task IDs] and set states [validity] of workflow steps [tasks] of the internal workflow schema as states [valid] using the generated key (see e.g., [0109] for at step 508, the workflow scheduler 262 transmitting a request for a workflow token to the server 270, the workflow token being specific to the workflow and including both: (i) the flow ID so that the workflow token may be tied to the specific workflow (via the flow ID), and (ii) the user ID so that the workflow token may be tied to the specific user associated with the workflow (via the user ID), at step 510, the server 270 using its private key 276 to digitally sign information that includes at least the received user ID and the flow ID, and then the server 270 generating the workflow token, and at step 512, the workflow token being transmitted from the server 270 to the workflow scheduler 262, [0110] for the workflow beginning, and as part of the workflow a particular task needing to be executed by the distributed computing system 254, therefore, at step 514 the workflow scheduler 262 transmitting a request for an action token to the server 270, the request for the action token including the workflow token, at step 516, the received workflow token being verified by the server 270 by verifying the digital signature in the received workflow token, assuming verification is successful, then at step 518 the server 270 generating an action token, the action token incorporating the information from the received workflow token, e.g. the user ID and flow ID, and the action token being digitally signed, [0111] for at step 520, the action token being transmitted from the server 270 to the workflow scheduler 262, at step 522, the action token and task being transmitted from the workflow scheduler 262 to the resource manager 258, the method of FIG. 
7 then being performed, except that the action token is used in place of the user ID in FIG. 7, [0085] for at step 404, the resource manager 258 transmitting the user ID and task to the computing node 256, at step 406, the computing node 256 generating a task ID that identifies the task, at step 408, the computing node 256 transmitting the task ID to the resource manager 258, at step 410, the resource manager 258 storing an indication in its memory that the task ID is associated with a valid task, and the task being considered valid because it was scheduled by the resource manager 258 and the task has not completed execution, and [0099] for at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. The resource manager converts the received workflow definition to a series of task IDs and the tasks are set as valid using the flow ID. The workflow scheduler transmits the action token, which includes the flow ID, to the resource manager in order to set the tasks as valid.);
store, at the distributed log storage, the internal workflow schema having the states to a state topic [validity indication] of the distributed log storage using the generated unique key, wherein the state topic includes the states [valid/invalid indications] of the internal workflow schema (see e.g., [0080] for the resource manager 258 including an associated memory 324 (e.g. to store applications, tasks, data, etc.) and the resource manager 258 further includes a first network interface 326 for communicating with the workflow scheduler 262 over a network and a second network interface 328 for communicating with the plurality of computing nodes, including computing node 256, over a network, [0083] for the server 270 being able to communicate with: the resource manager 258, [0111] for at step 520, the action token being transmitted from the server 270 to the workflow scheduler 262, at step 522, the action token and task being transmitted from the workflow scheduler 262 to the resource manager 258, the method of FIG. 7 then being performed, except that the action token is used in place of the user ID in FIG. 7, [0085] for at step 404, the resource manager 258 transmitting the user ID and task to the computing node 256, at step 406, the computing node 256 generating a task ID that identifies the task, at step 408, the computing node 256 transmitting the task ID to the resource manager 258, at step 410, the resource manager 258 storing an indication in its memory that the task ID is associated with a valid task, and the task being considered valid because it was scheduled by the resource manager 258 and the task has not completed execution, and [0099] for at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. 
The resource manager memory is communicatively coupled to the workflow scheduler, server, and nodes. The series of task IDs having valid states are stored, at the resource manager memory, to an indication of validity using the action token, which includes the flow ID. The indication of validity includes valid and invalid indications of the series of task IDs.);
receive a message [response] that includes a state based on one or more steps of the internal workflow schema (see e.g., [0088] for at step 426, the resource manager 258 receiving the task ID and checking whether the task ID is valid and at step 428, the resource manager 258 transmitting a response to the server 270 indicating that the task is valid, i.e. indicating that the task identified by the task ID is still being executed by the computing node 256. The server receives a response that includes a valid indication based on a task ID of the series of task IDs.);
perform, with one or more workers [data storage system] at the server, at least one operation [data access] based on the received message (see e.g., [0096] for at step 432, a data access token being issued and transmitted to the computing node 256, the data access token being generated by the server 270, in which case the data access token may incorporate the user ID and/or task ID, the data access token further incorporating a digital signature that is generated by the server 270 using the private key 276, the digital signature being generated by the server 270 using the user ID and/or task ID, and the digital signature being verified by the data storage system 260 before allowing the data access to occur, [0097] for a data access token being only issued and sent to the computing node 256 at step 432 if at step 428 the resource manager 258 indicated that the task associated with the task ID was valid, i.e. that the task identified by the task ID was still being executed by the computing node 256, and [0098] for at step 434, the computing node 256 using the data access token received from the server 270 in order to access the data in the data storage system 260 and for example, the computing node 256 transmitting, to the data storage system 260, a request to access data, the request including the data access token, and the data storage system 260 confirming the validity of the data access token and facilitating the data access, and execution of the task by the computing node 256 continuing, and including multiple data access requests (e.g. by repeating steps 418 to 434), but eventually execution of the task finishing at step 436. The computing node transmits, to the data storage system, a request to access data based on the response including a valid indication.);
update, at the distributed log storage, the state based on the performed at least one operation (see e.g., [0098] for upon task completion, at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing, for example, the indication being in the form of a message indicating that the computing resources associated with the task are again free and ready for another task to be scheduled, the indication including the task ID, and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. The resource manager memory updates the indication of validity based on completion of the performed data access.); and
internal workflow schema for the generated key (see e.g., [0109] for at step 508, the workflow scheduler 262 transmitting a request for a workflow token to the server 270, the workflow token being specific to the workflow and including both: (i) the flow ID so that the workflow token may be tied to the specific workflow (via the flow ID), and (ii) the user ID so that the workflow token may be tied to the specific user associated with the workflow (via the user ID), at step 510, the server 270 using its private key 276 to digitally sign information that includes at least the received user ID and the flow ID, and then the server 270 generating the workflow token, and at step 512, the workflow token being transmitted from the server 270 to the workflow scheduler 262, [0110] for the workflow beginning, and as part of the workflow a particular task needing to be executed by the distributed computing system 254, therefore, at step 514 the workflow scheduler 262 transmitting a request for an action token to the server 270, the request for the action token including the workflow token, at step 516, the received workflow token being verified by the server 270 by verifying the digital signature in the received workflow token, assuming verification is successful, then at step 518 the server 270 generating an action token, the action token incorporating the information from the received workflow token, e.g. the user ID and flow ID, and the action token being digitally signed, [0111] for at step 520, the action token being transmitted from the server 270 to the workflow scheduler 262, at step 522, the action token and task being transmitted from the workflow scheduler 262 to the resource manager 258, the method of FIG. 7 then being performed, except that the action token is used in place of the user ID in FIG. 
7, [0085] for at step 404, the resource manager 258 transmitting the user ID and task to the computing node 256, at step 406, the computing node 256 generating a task ID that identifies the task, at step 408, the computing node 256 transmitting the task ID to the resource manager 258, at step 410, the resource manager 258 storing an indication in its memory that the task ID is associated with a valid task, and the task being considered valid because it was scheduled by the resource manager 258 and the task has not completed execution, and [0099] for at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. The resource manager converts the received workflow definition to a series of task IDs and the tasks are set as valid using the flow ID. The workflow scheduler transmits the action token, which includes the flow ID, to the resource manager in order to set the tasks as valid.).
Gardner does not specifically disclose the states being not-started states. However, Pearl teaches
the states being not-started [pending] states (see e.g., [0178] for a work item being associated with a then current pending state (e.g., state=pend)).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner to include the states being not-started states, as taught by Pearl, for the benefit of distinguishing between not-started and in-progress states for steps that have not completed (see e.g., Pearl, [0178]).
Gardner in view of Pearl does not specifically disclose compacting, at the distributed log storage, the state topic of the internal workflow schema based on the updated state, wherein the compacting reduces the states of the internal workflow schema to the current states, without intermediary states. However, Pinczel teaches
compacting, at the distributed log storage [distributed log], the state topic [field] of the internal workflow schema [log entries] based on the updated state [value], wherein the compacting reduces the states of the internal workflow schema to the current states, without intermediary states (see e.g., [0004] for using a replicated log (also known as a ‘distributed log’), [0005] for FIG. 1 showing an exemplary log 12, having three log entries 14, 16, 18 for a key-value database, each log entry 14, 16, 18 being labelled with a respective log index I, the state of the database after each log entry 14 having been applied to the database being shown above the log entries 14, thus, the first log entry 14 (with log index ‘1’) indicating that a value “x” was added to field A in the database (note that a field in a key-value database is also known as a “key”), after the first log entry 14, the database having a state 20 in which there is a value “x” in field A, the second log entry 16 (with log index ‘2’) indicating that a value “y” was added to data field B in the database, after the second log entry 16, the database having a state 22 in which there is a value “x” in field A and a value “y” in field B, the third log entry 18 (with log index ‘3’) adding a value “z” to data field A in the database, as data field A already has a value “x”, the value “z” overwriting the value “x” in data field A, and after the third log entry 18, the database having a state 24 in which there is a value “z” in field A and value “y” in field B, and [0015] for FIG. 2 showing the effect of a log cleaning approach on the example of FIG. 1, here, as the effect of the first log entry 14 has been overridden by a later log entry (i.e. 
the log entry with log index ‘3’ replaces the value of data field A that was set by the first log entry 14), it being fine if a newly-joined server only receives the second and third log entries 16, 18, as the server would still arrive at the correct state (A=“z” and B=“y”), thus, the first log entry 14 being removed from the log. At the distributed log, each field of the log entries is compacted based on the updated value. The compacting reduces the values of the log entries to the current values, without intermediate values.).
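For illustration, the log-cleaning behavior described in Pinczel's FIG. 1 and FIG. 2 example may be sketched as follows. This is an illustrative sketch only; the function name and data representation are assumptions, not Pinczel's disclosed implementation.

```python
# Illustrative sketch of key-value log compaction: keep only the final
# write to each field, as in Pinczel's FIG. 1/2 example where log
# entries 1-3 compact to the current state A="z", B="y".
def compact_log(entries):
    """Reduce (field, value) log entries to the latest entry per field."""
    latest = {}  # index of the last entry that wrote each field
    for i, (field, _value) in enumerate(entries):
        latest[field] = i
    # Keep an entry only if it is the final write to its field.
    return [entry for i, entry in enumerate(entries) if latest[entry[0]] == i]

log = [("A", "x"), ("B", "y"), ("A", "z")]  # log indices 1, 2, 3
compacted = compact_log(log)
# The overridden first entry (A="x") is removed; replaying the compacted
# log still yields the correct current state A="z", B="y".
```

A newly-joined server replaying only the compacted entries arrives at the same state as one that replayed the full log, which is the consistency property the compaction relies upon.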
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner in view of Pearl to compact, at the distributed log storage, the state topic of the internal workflow schema based on the updated state, wherein the compacting reduces the states of the internal workflow schema to the current states, without intermediary states, as taught by Pinczel, for the benefit of maintaining consistency (or identifying inconsistencies) in replicated log-based consensus protocols that use log compaction (see e.g., Pinczel, [0023]).
Gardner and Pearl in view of Pinczel does not specifically disclose store the at least one operation performed by the one or more workers and at least one state of the internal workflow schema based on the at least one operation performed as a log topic separately from the distributed log storage, wherein the log topic is a partition of related data to which the one or more workers store the at least one state of the internal workflow schema being performed; and wherein the stored operations of the log topic are separately stored from the distributed log storage without compaction. However, Bent teaches
store the at least one operation [event] performed by the one or more workers [app] and at least one state [time] of the internal workflow schema [sequence] based on the at least one operation performed as a log topic [complete log] separately from the distributed log storage [event log storage 124], wherein the log topic is a partition of related data [re-constructed, complete, discrete events on storage device 210] to which the one or more workers store the at least one state of the internal workflow schema being performed (see e.g., [0025] for the actions or interactions of a user running the app on the user device 128 being recorded as events, [0030] for a set of event logs E collected for viewed content set C, [0031] for compaction being primarily performed on already generated event logs in order to allow for further event generation, [0032] for meta data information for a content item in this embodiment including but not being limited to, the length of the item, e.g., in terms of time, [0034] for the set of event logs E collected for viewed content set C being retrieved from one or more storage devices 124, 126 that stores the event logs and content with metadata, [0037] for with event log E, an Event Summarizer 118 in one embodiment applying one of the following functions to progressively compact the event data and this process iterating until the requisite compaction is achieved, [0038] for Drop (e): this function dropping an event e, an event being dropped only when domain rules allow it to be re-created later on the server side, and for example, a fast forward (ffwd) or rewind (rwd) event always returning to previous state; thus, play-ffwd-play-pause can be compacted to play-ffwd-pause, and a domain rule can later insert a play prior to the pause, [0042] for the compacted segments being stored in storage 124, e.g., a storage device that stores event logs, [0047] for generating as output a re-constructed, complete, discrete event logs for content X and 
user A, where the re-constructed sequence represents the most likely time-stamped sequence of events, as generated when user A interacted with the content X and the reconstructed complete, discrete event logs linked with context X for user A being stored in a storage device 210, and [0049] for the component 208 inserting additional events (that might have been dropped at the client side leveraging domain rules) at appropriate positions in the sequence to obtain the full re-constructed sequence, in one embodiment, this insertion of events being governed through the same state transitions modeled for the dropping of events as described above, for example, if a play event has been dropped after a rewind event after a learner has viewed a learning content this being reinserted on the server side as the server reestablishes the consistency of event sequences, and in this case, in one embodiment, the consistency of event sequences being maintained through a modeled state transition (state machine), which determines that a rewind event must be followed by a play event if the content item has not been stopped after the rewind event. An event performed by the app and a time, based on the performed event, of the sequence is stored as a complete log. The complete log storage is separate from the event log storage. The complete log is storage device 210 storing re-constructed, complete, discrete events to which the app stores the times of the sequence being performed.); and
wherein the stored operations of the log topic are separately stored from the distributed log storage without compaction (see e.g., [0042] for the compacted segments being stored in storage 124, e.g., a storage device that stores event logs, [0047] for generating as output a re-constructed, complete, discrete event logs for content X and user A, where the re-constructed sequence represents the most likely time-stamped sequence of events, as generated when user A interacted with the content X and the reconstructed complete, discrete event logs linked with context X for user A being stored in a storage device 210, and [0049] for the component 208 inserting additional events (that might have been dropped at the client side leveraging domain rules) at appropriate positions in the sequence to obtain the full re-constructed sequence, in one embodiment, this insertion of events being governed through the same state transitions modeled for the dropping of events as described above, for example, if a play event has been dropped after a rewind event after a learner has viewed a learning content this being reinserted on the server side as the server reestablishes the consistency of event sequences, and in this case, in one embodiment, the consistency of event sequences being maintained through a modeled state transition (state machine), which determines that a rewind event must be followed by a play event if the content item has not been stopped after the rewind event. The stored events of the complete log are stored in storage device 210, which is separate from event log storage 124. The stored events of the complete log are uncompacted.).
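For illustration, the claimed split between a compacted state topic and a separately stored, uncompacted log topic may be sketched as follows. The class and attribute names are hypothetical and are not drawn from Bent's disclosure.

```python
# Illustrative sketch of dual storage: a compacted store holding only the
# current state per key, alongside an append-only, uncompacted record of
# every operation (analogous to the state topic / log topic distinction).
class DualStore:
    def __init__(self):
        self.state_topic = {}   # compacted: one current state per key
        self.log_topic = []     # uncompacted: every operation, in order

    def record(self, key, state):
        self.log_topic.append((key, state))  # full history, no compaction
        self.state_topic[key] = state        # only the current state survives

store = DualStore()
for key, state in [("task1", "pend"), ("task1", "active"), ("task1", "done")]:
    store.record(key, state)
# state_topic holds only the current state; log_topic retains all three entries.
```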
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner and Pearl in view of Pinczel to store the at least one operation performed by the one or more workers and at least one state of the internal workflow schema based on the at least one operation performed as a log topic separately from the distributed log storage, wherein the log topic is a partition of related data to which the one or more workers store the at least one state of the internal workflow schema being performed; and wherein the stored operations of the log topic are separately stored from the distributed log storage without compaction, as taught by Bent, for the benefit of maintaining valuable data and insights while complying with resource constraints (see e.g., Bent, [0019]).
As to claim 17, the limitations of parent claim 16 have been discussed above. Gardner teaches
wherein the server validates the received workflow definition (see e.g., [0109] for at step 506, the workflow scheduler 262 generating a flow ID, which is a unique ID that identifies the workflow, at step 508, the workflow scheduler 262 transmitting a request for a workflow token to the server 270, the workflow token being specific to the workflow and including both: (i) the flow ID so that the workflow token may be tied to the specific workflow (via the flow ID), and (ii) the user ID so that the workflow token may be tied to the specific user associated with the workflow (via the user ID), at step 510, the server 270 using its private key 276 to digitally sign information that includes at least the received user ID and the flow ID, and then the server 270 generating the workflow token, and at step 512, the workflow token being transmitted from the server 270 to the workflow scheduler 262 and [0110] for the workflow beginning, and as part of the workflow a particular task needing to be executed by the distributed computing system 254, therefore, at step 514 the workflow scheduler 262 transmitting a request for an action token to the server 270, the request for the action token including the workflow token, at step 516, the received workflow token being verified by the server 270 by verifying the digital signature in the received workflow token, assuming verification is successful, then at step 518 the server 270 generating an action token, the action token incorporating the information from the received workflow token, e.g. the user ID and flow ID, and the action token being digitally signed. The server validates the received workflow definition through verification of its digital signature.).
As to claim 18, the limitations of parent claim 16 have been discussed above. Gardner teaches
wherein the internal workflow schema includes at least one workflow step selected from the group consisting of: a hold sequential workflow step, a parallel workflow step, and a nested workflow step (see e.g., [0073] for when a computing node 256 has available resources, a task being assigned by the resource manager 258 and transmitted to the computing node 256 for execution and in some embodiments, related tasks being sent for execution to other computing nodes, e.g., for parallel execution. The series of task IDs include parallel tasks.).
As to claim 19, the limitations of parent claim 16 have been discussed above. Gardner teaches wherein
the server determines that the state of the received message is at least one selected from the group consisting of: not-started, in-progress [valid], error, and completed (see e.g., [0088] for at step 426, the resource manager 258 receiving the task ID and checking whether the task ID is valid and at step 428, the resource manager 258 transmitting a response to the server 270 indicating that the task is valid, i.e., indicating that the task identified by the task ID is still being executed by the computing node 256. The server receives a response that includes a valid indication based on a task ID of the series of task IDs. The valid indication corresponds to in-progress.).
As to claim 20, the limitations of parent claims 16 and 19 have been discussed above. Gardner does not specifically teach wherein when the state of the received message is not-started, a worker of the server performs the at least one operation based on the received message. However, Pearl teaches
wherein when the state of the received message [workflow event] is not-started, a worker [workflow engine] of the server performs the at least one operation [sign/publish] based on the received message (see e.g., [0179] for the admin role provisioned to user u3 in workflow view 16194 indicating that user u3 can take action to complete the active work item w7, specifically, as an example, user u3 inviting user u5 from enterprise D to participate as an approver for the workflow (at workflow event 16522), responsive to the workflow event 16522, a workflow view 16195 being presented with associated instances of metadata (e.g., workflow participant metadata 16322 and work item metadata 16342), as shown, the workflow participant metadata 16322 comprising a metadata update 16361 identifying user u5 as the approver at work item w8, the work item metadata 16342 also comprising a metadata update 16362 indicating a state change for work item w7 from active to done, and a state change for work item w8 from pend to active, and the work item w8 remaining in the active state until the task (e.g., sign) associated with work item w8 is executed, [0180] for FIG. 16C2 depicting a state transition of the workflow view 16195, the workflow participant metadata 16322, and the work item metadata 16342 earlier described in FIG. 
16C1, as shown, the state transition being responsive to the user u5 from enterprise D signing the document f3 (at workflow event 16523), responsive to the workflow event 16523, a workflow view 16196 being presented with associated instances of metadata (e.g., workflow participant metadata 16323 and work item metadata 16343), as shown, the workflow participant metadata 16323 comprising a metadata update 16363 updating the role of user u4 to approver and updating the role of user u5 to reviewer, the work item metadata 16343 also comprising a metadata update 16364 indicating a state change for work item w8 from active to done and a state change for work item w9 from pend to active, and the work item w9 remaining in the active state until the task (e.g., publish) associated with work item w9 is executed, [0187] for the workflow engine 1662 generating the workflow responses 1654 based at least in part on detected instances of the workflow events 1652, a given workflow response precipitating one or more instances of content updates 1638 to the shared content 1622, and for example, a signature provided by an approver being applied in a designated location of a legal agreement without having to download, print, sign, scan, and upload the agreement, and [0188] for an event mapping 1647 in the workflow rules 1626 mapping a received workflow event to one or more operations (e.g., update metadata, update content, send workflow alert, etc.) executed at the workflow engine 1662, a logic mapping 1646 in the workflow rules 1626 and/or a mapping operation of functions within the rule API 1665 mapping a trigger description in the work item attributes 1645 to a set of logic to be executed at the workflow engine 1662, and for example, a trigger=parent.done work item attribute mapping to a set of conditional logic (e.g., if parent.state==“done” THEN set.metadata.child.state=“active”) comprising inputs extracted from other metadata (e.g., parent attributes, state attributes, etc.). 
When the state of the sign workflow event is pending and the state of the select task is done, the pending state may be updated to an active state and workflow engine signs based on the sign workflow event. When the state of the publish workflow event is pending and the state of the sign task is done, the pending state may be updated to an active state and workflow engine publishes based on the publish workflow event.).
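For illustration, the conditional state-transition logic Pearl describes in [0188] (e.g., if parent.state=="done" THEN set child state to "active") may be sketched as follows. The function and variable names are hypothetical, not Pearl's code.

```python
# Illustrative sketch of Pearl's trigger=parent.done logic: when a parent
# work item completes, its pending child work item is activated.
def advance(work_items, parent, child):
    """If the parent is done and the child is pending, activate the child."""
    if work_items[parent] == "done" and work_items[child] == "pend":
        work_items[child] = "active"
    return work_items

items = {"w7": "done", "w8": "pend"}
advance(items, "w7", "w8")
# w8 transitions from pend to active once w7 is done.
```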
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner wherein when the state of the received message is not-started, a worker of the server performs the at least one operation based on the received message, as taught by Pearl, for the benefit of distinguishing between not-started and in-progress states for steps that have not completed (see e.g., Pearl, [0178]).
As to claim 21, the limitations of parent claims 16 and 19 have been discussed above. Gardner does not specifically teach wherein when the state of the received message is in-progress, a worker of the server skips the message. However, Pearl teaches
wherein when the state of the received message is in-progress [active], a worker of the server skips the message (see e.g., [0187] for the workflow engine 1662 generating the workflow responses 1654 based at least in part on detected instances of the workflow events 1652 and a workflow event corresponding to a “Forward” button click generating a workflow response comprising an update to metadata describing a work item state (e.g., from state=active to state=done) and a workflow alert sent to the user device of the owner of the next work item in the workflow. When the state of the Forward workflow event is active, the active state may be updated to a done state and the workflow engine skips execution of the Forward workflow event).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner wherein when the state of the received message is in-progress, a worker of the server skips the message, as taught by Pearl, for the benefit of distinguishing between not-started and in-progress states for steps that have not completed (see e.g., Pearl, [0178]).
As to claim 22, the limitations of parent claims 16 and 19 have been discussed above. Gardner does not specifically disclose wherein when the state of the received message is an error state, a worker of the server retries performing the operation from which the error occurred. However, Pearl teaches
wherein when the state [status] of the received message [job request] is an error state [failed], a worker [retry engine] of the server [job manager] retries performing the operation [action] from which the error occurred (see e.g., [0087] for sending job requests to the jobs manager, [0094] for a metadata attribute of a contract template including a status attribute that causes a particular action or job to be performed and in this manner, metadata or changes to metadata triggering job requests (e.g., events or actions), [0104] for the job manager including a retry engine, [0107] for one service in each cluster retrying jobs, and [0114] for the status engine 825 ensuring that jobs are executed and in one embodiment, jobs and status updates (started, completed, failed) being persisted in a local database (e.g., the local HBase cluster). When the status of the job request is failed, a retry engine of the job manager may retry performing the action from which the failure occurred.).
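For illustration, the retry behavior Pearl attributes to the retry engine (retrying a job whose persisted status is failed) may be sketched as follows. The API shown is hypothetical and is not Pearl's implementation.

```python
# Illustrative sketch: retry a job action while its status comes back
# "failed", stopping once it reports "completed" or attempts run out.
def run_with_retry(job, max_attempts=3):
    """Invoke job() until it returns "completed" or attempts are exhausted."""
    for attempt in range(max_attempts):
        status = job()  # job returns "completed" or "failed"
        if status == "completed":
            return status, attempt + 1
    return "failed", max_attempts

attempts = iter(["failed", "failed", "completed"])
status, tries = run_with_retry(lambda: next(attempts))
# status == "completed" on the third attempt
```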
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner wherein when the state of the received message is an error state, a worker of the server retries performing the operation from which the error occurred, as taught by Pearl, for the benefit of ensuring job execution (see e.g., Pearl, [0103]).
As to claim 23, the limitations of parent claim 16 have been discussed above. Gardner teaches
wherein when the at least one operation is completed by a worker, a completed state [invalid indication] is written to the distributed log storage (see e.g., [0098] for upon task completion, at step 438 the computing node 256 transmitting an indication to the resource manager 258 that the task has finished executing, for example, the indication being in the form of a message indicating that the computing resources associated with the task are again free and ready for another task to be scheduled, the indication including the task ID, and at step 440, the resource manager 258 storing an indication in its memory that the task ID is now associated with an invalid task, i.e. a task that has completed execution. When the data access is completed by the data storage system, an invalid indication is written to the resource manager memory.).
As to claim 24, the limitations of parent claim 16 have been discussed above. Gardner does not specifically disclose wherein the server transmits the state topic of the internal workflow schema for display. However, Pearl teaches
wherein the server [collaboration platform] transmits the state topic [status indication] of the internal workflow schema [work flow tasks] for display (see e.g., [0061] for when deployed in an organizational setting, multiple workspaces (e.g., workspace A-N) being created to support different projects or a variety of work flows and [0066] for in a user interface of the web-based collaboration platform where notifications are presented, users, via the user interface, creating action items (e.g., tasks) and delegating the action items to other users including collaborators pertaining to a work item 215, for example, the collaborators 206 being in the same workspace A 205 or the user including a newly invited collaborator, similarly, in the same user interface where discussion topics can be created in a workspace (e.g., workspace A, B or N, etc.), actionable events on work items being created and/or delegated/assigned to other users such as collaborators of a given workspace or other users, through the same user interface, task status and updates from multiple users or collaborators being indicated and reflected, and in some instances, the users performing the tasks (e.g., review or approve or reject, etc.) via the same user interface. The collaboration platform transmits the status indication of the work flow tasks.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Gardner wherein the server transmits the state topic of the internal workflow schema for display, as taught by Pearl, for the benefit of allowing users to collaborate on a task (see e.g., Pearl, [0066]).
Response to Arguments
Applicant's arguments filed August 25, 2025 have been fully considered but they are not persuasive.
On pages 8-9 of Applicant’s Response, Applicant argues:
The Office Action acknowledges that Gardner and Pearl in view of Pinczel does not specifically disclose storing the at least one operation performed by the one or more workers and at least one state of the internal workflow schema based on the at least one operation performed as a log topic separately from the distributed log storage (see, e.g., page 10 of the Office Action), and relies upon Bent.
Bent appears to disclose that actions or interactions of a user running the app on the user device are recorded as events. See, e.g., paragraph 0025. Bent appears to disclose that compaction is performed on already generated event logs in order to allow for further event generation, once an entire activity has been performed by the user. An event summarizer applies a function to progressively compact event data. See, e.g., paragraph 0037. The compacted segments are stored in a storage device that stores event logs. See, e.g., paragraph 0042. That is, Bent appears to disclose recording events, compacting event logs, and storing the compacted segments, but Bent does not disclose or suggest storing at least one operation performed by one or more workers and at least one state of an internal workflow schema based on at least one operation performed as a log topic separately from a distributed log storage, where the log topic is a partition of related data to which the one or more workers store the at least one state of the internal workflow schema being performed.
Examiner respectfully disagrees with Applicant’s arguments. In addition to teaching the storage of compacted event segments, Bent also teaches the storage of complete, uncompacted event segments in a separate location. “A reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill the art” (see MPEP § 2123(I)). Bent recites that “[t]he component 208 in one embodiment generates as output a re-constructed, complete, discrete event logs for content X and user A, where the re-constructed sequence represents the most likely time-stamped sequence of events, as generated when user A interacted with the content X. The reconstructed complete, discrete event logs linked with context X for user A may be stored in a storage device 210” (see [0047]). Bent further recites that “[t]he component 208 may insert additional events (that might have been dropped at the client side leveraging domain rules) at appropriate positions in the sequence to obtain the full re-constructed sequence. In one embodiment, this insertion of events is governed through the same state transitions modeled for the dropping of events as described above. For example, if a play event has been dropped after a rewind event after a learner has viewed a learning content this can be reinserted on the server side as the server reestablishes the consistency of event sequences. In this case, in one embodiment, the consistency of event sequences is maintained through a modeled state transition (state machine), which determines that a rewind event must be followed by a play event if the content item has not been stopped after the rewind event” (see [0049]).
Bent’s “event” may correspond to the claimed “operation.” Bent’s “time” may correspond to the claimed “state.” Bent’s “sequence” may correspond to the claimed “internal workflow schema.” Bent’s “complete log” may correspond to the claimed “log topic.” Bent’s “reconstructed complete, discrete event logs . . . stored in a storage device 210” may correspond to the claimed “partition of related data.” Accordingly, Bent teaches storing an event and a time of the sequence as a complete log, separately from the event log storage, wherein the complete log is reconstructed complete discrete event logs stored in storage device 210 to which the times of the sequence are stored. Therefore, Bent also teaches “storing the at least one operation performed by the one or more workers and at least one state of the internal workflow schema based on the at least one operation performed as a log topic separately from the distributed log storage, wherein the log topic is a partition of related data to which the one or more workers store the at least one state of the internal workflow schema being performed,” as recited by claim 1.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Tian (US Publication No. 2021/0081396) for “the state data can be log-structure friendly data such that the history state data can increase without requiring compaction” (see [0040]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARA J GLASSER whose telephone number is (571)270-3666. The examiner can normally be reached Monday-Thursday, 10:00am-2:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Apu Mofiz can be reached at (571)272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
02-23-2026
/DARA J GLASSER/Examiner, Art Unit 2161
/APU M MOFIZ/Supervisory Patent Examiner, Art Unit 2161