DETAILED ACTION
Claims 1-6, 8-12, and 14-29 are pending.
Claims 7 and 13 are canceled.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8-16, 20, 23, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Bond et al. (US 2022/0326974 A1) in view of Balakrishnan et al. (US 2021/0124919 A1).
Bond was cited in the previous Office Action.
Regarding claim 1, Bond teaches managing an automation workflow process capable of utilizing at least one of one or more software robots, one or more external applications and one or more user inputs (Fig. 1; Abstract: preparing a command for execution by the remote application, the command including a command identifier and being configured to require a response; transmitting the command for the remote application from the first system to the second system via the virtual channel; receiving the command at the second system and providing the command to the remote application seeking to execute the command; generating a response to the provision of the command to the remote application, the response including the command identifier; and sending the response to the first system via the virtual channel; [0158-159]: The received results may be used in order to generate further commands by either the RPA application 14 or the browser-based application 44.; [0032] transmitting user interaction (keyboard strokes, mouse movement, etc), at a local terminal (client terminal) to a remote location (central server) for updating a virtualisation of the operating environment at the central server, can also be used in a different manner to transmit commands for remote RPA operation.; [0083]; [0199] The workflow might look like: Connect to remote desktop server A. Connect to remote desktop server B. Connect to remote desktop server C. Perform tasks across remote desktop servers A, B, C depending on the overall requirement.), comprising:
processing a dataset by a first software robot to produce an automation result ([0097] results of commands; wherein the results are obtained from RPA Application 14);
detecting completion of the processing of the dataset by the first software robot ([0097] In addition, the automated command system 10 is further configured to utilise the virtual channel 18 to send the results of commands);
processing the automation result by a first external application to produce processed data ([0097] send the results of commands, which have been transmitted from the command input interface 24 to the command delivery interface 26 (in the remote desktop server); [0136] Once the command delivery interface 26 receives the requests, the command delivery interface 26 then decrypts the received request and performs the requested action which may involve performing an API call, running a program, inputting data into an application or web browser, moving the mouse or any other mechanism required to fulfil the request.);
detecting completion of the processing of the automation result by the first external application ([0097] back to the command interface 24 and ultimately back to the remote process automation application 14. Such results may comprise a notification that the executed command was successful or an error code if the command was not executed successfully. In some cases, the initially transmitted command may generate an output (such as in the case where the command is a request for a value of a particular item in a spreadsheet). In these cases, the results transmitted back to the command input interface 24 comprise the requested output, in addition to a notification that the executed command was successful.; [0158] In some embodiments, the results of the action are provided back to the command input interface 24 from which the request originates.);
presenting a user interface including at least a portion of the processed data and detecting receipt of an input via the user interface ([0083] The updated virtualisation, or just the resultant changes, would then typically be transmitted back to the client terminal device 12 and changes caused by the user interaction with the graphical user interface would be displayed and further consequential user interaction may be captured.;[0158]; [0159]; [0173]).
Bond teaches a system and method for automated process orchestration but does not explicitly teach a non-transitory computer readable medium including at least computer program code stored thereon for managing an automation workflow process and computer program code for performing the steps of the methods; and
wherein the processed data includes a confidence indicator associated with the processing performed by the first external application, and wherein a value of the confidence indicator being less than a predetermined threshold indicates that validation of the processed data is required.
However, Balakrishnan teaches a non-transitory computer readable medium including at least computer program code stored thereon for managing an automation workflow process ([0039] The processing element or elements may be programmed with a set of executable instructions (e.g., software instructions), where the instructions may be stored on (or in) a suitable non-transitory data storage element. In some embodiments, one or more of the operations, functions, processes, or methods described herein may be implemented by a specialized form of hardware, such as a programmable gate array, application specific integrated circuit (ASIC), or the like.; [0133] workflow for authenticating/verifying a document; [0007] the image and text processing described herein could be used with robotic-process-automation effort) and computer program code for performing the steps of the methods ([0039]); and
wherein the processed data includes a confidence indicator associated with the processing performed by the first external application, and wherein a value of the confidence indicator being less than a predetermined threshold indicates that validation of the processed data is required (Fig. 1(c) steps 145-147; [0007]; [0088] FIG. 1(c) is a second flowchart or flow diagram illustrating an example process, operation, method, or function 130 for authenticating/verifying a document, in accordance with some embodiments of the system and methods described herein. These processing steps or stages may be described in further detail as follows: [0089] Receive or access an image of a subject document (as suggested by step or stage 132) [0090] as examples, the image may be a photograph, scan or generated by use of an OCR process; [0088-119]; [0091] Process the image of the subject document to identify and extract one or more invariable attributes of the subject document (step or stage 133); [0092] where the invariable attributes may include labels, titles, headers, field names, logos, holograms, seals, or similar features that can be recognized with confidence even if an image is skewed or distorted, and do not represent information or data specific to a person in possession of the document (such as data inserted into a field, a birth date, an address, etc.); [0102] Generating a score or metric reflecting a confidence level or accuracy of the identified attributes and/or the document type (i.e., a measure of the match or closeness of a match to a template) based on the transformation and extracted invariable attributes; [0103] Determining if the generated score satisfies (typically by exceeding) a threshold value or confidence level; [0105] If the generated score does not satisfy the threshold value, then re-evaluating the subject document (rescoring) using one or more of additional invariable attributes, inspection of the subject document by a person, or use of a different methodology to determine the correct document type).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Balakrishnan of implementing the methods in a non-transitory computer readable medium with the method as taught by Bond. The modification would have been motivated by the desire to combine known elements, such as a non-transitory computer readable medium storing program code, to yield the predictable result of implementing the method on different systems by executing the code stored in the medium.
Regarding claim 2, Balakrishnan teaches wherein the first external application provides data extracted from an image of a document ([0089] Receive or access an image of a subject document (as suggested by step or stage 132) [0090] as examples, the image may be a photograph, scan or generated by use of an OCR process.).
Regarding claim 3, Balakrishnan teaches wherein the first external application provides data extraction and validation of extracted data ([0089] Receive or access an image of a subject document (as suggested by step or stage 132) [0090] as examples, the image may be a photograph, scan or generated by use of an OCR process; Fig. 1(c) verification and validation of invariable attributes).
Regarding claim 4, Balakrishnan teaches wherein the first external application provides an evaluation of quality of data extracted from an image of a document ([0127] quality detection; [0128]).
Regarding claim 5, Bond teaches managing an automation workflow process capable of utilizing one or more software robots, one or more external applications and/or one or more human inputs (Abstract: preparing a command for execution by the remote application, the command including a command identifier and being configured to require a response; transmitting the command for the remote application from the first system to the second system via the virtual channel; receiving the command at the second system and providing the command to the remote application seeking to execute the command; generating a response to the provision of the command to the remote application, the response including the command identifier; and sending the response to the first system via the virtual channel; [0158-159]: The received results may be used in order to generate further commands by either the RPA application 14 or the browser-based application 44.; [0032] transmitting user interaction (keyboard strokes, mouse movement, etc), at a local terminal (client terminal) to a remote location (central server) for updating a virtualisation of the operating environment at the central server, can also be used in a different manner to transmit commands for remote RPA operation.; [0083]; [0199] The workflow might look like: Connect to remote desktop server A. Connect to remote desktop server B. Connect to remote desktop server C. Perform tasks across remote desktop servers A, B, C depending on the overall requirement.), comprising:
processing a dataset by a first external application to produce processed data ([0097] send the results of commands, which have been transmitted from the command input interface 24 to the command delivery interface 26 (in the remote desktop server); [0136] Once the command delivery interface 26 receives the requests, the command delivery interface 26 then decrypts the received request and performs the requested action which may involve performing an API call, running a program, inputting data into an application or web browser, moving the mouse or any other mechanism required to fulfil the request.);
detecting completion of the processing of the dataset by the first external application ([0097] back to the command interface 24 and ultimately back to the remote process automation application 14. Such results may comprise a notification that the executed command was successful or an error code if the command was not executed successfully. In some cases, the initially transmitted command may generate an output (such as in the case where the command is a request for a value of a particular item in a spreadsheet). In these cases, the results transmitted back to the command input interface 24 comprise the requested output, in addition to a notification that the executed command was successful.; [0158] In some embodiments, the results of the action are provided back to the command input interface 24 from which the request originates.);
presenting a user interface including at least a portion of the processed data and detecting receipt of a human input with respect to the user interface ([0083] The updated virtualisation, or just the resultant changes, would then typically be transmitted back to the client terminal device 12 and changes caused by the user interaction with the graphical user interface would be displayed and further consequential user interaction may be captured.; [0159]).
In addition, Balakrishnan teaches a non-transitory computer readable medium including at least computer program code stored thereon for managing an automation workflow process ([0039] The processing element or elements may be programmed with a set of executable instructions (e.g., software instructions), where the instructions may be stored on (or in) a suitable non-transitory data storage element. In some embodiments, one or more of the operations, functions, processes, or methods described herein may be implemented by a specialized form of hardware, such as a programmable gate array, application specific integrated circuit (ASIC), or the like.; [0133] workflow for authenticating/verifying a document; [0007] the image and text processing described herein could be used with robotic-process-automation effort) and computer program code for performing the steps of the methods ([0039]); and
wherein the processed data includes a confidence indicator associated with the processing performed by the first external application, and wherein a value of the confidence indicator being less than a predetermined threshold indicates that validation of the processed data is required (Fig. 1(c) steps 145-147; [0088] FIG. 1(c) is a second flowchart or flow diagram illustrating an example process, operation, method, or function 130 for authenticating/verifying a document, in accordance with some embodiments of the system and methods described herein. These processing steps or stages may be described in further detail as follows: [0089] Receive or access an image of a subject document (as suggested by step or stage 132) [0090] as examples, the image may be a photograph, scan or generated by use of an OCR process; [0088-119]; [0091] Process the image of the subject document to identify and extract one or more invariable attributes of the subject document (step or stage 133); [0092] where the invariable attributes may include labels, titles, headers, field names, logos, holograms, seals, or similar features that can be recognized with confidence even if an image is skewed or distorted, and do not represent information or data specific to a person in possession of the document (such as data inserted into a field, a birth date, an address, etc.); [0102] Generating a score or metric reflecting a confidence level or accuracy of the identified attributes and/or the document type (i.e., a measure of the match or closeness of a match to a template) based on the transformation and extracted invariable attributes; [0103] Determining if the generated score satisfies (typically by exceeding) a threshold value or confidence level; [0105] If the generated score does not satisfy the threshold value, then re-evaluating the subject document (rescoring) using one or more of additional invariable attributes, inspection of the subject document by a person, or use of a different methodology to determine the correct document type).
Regarding claim 6, it recites limitations similar to those of claim 2 above. Therefore, it is rejected under the same rationale.
Regarding claim 8, it recites limitations similar to those of claim 4 above. Therefore, it is rejected under the same rationale.
Regarding claim 9, Bond teaches processing at least a portion of the processed data by a first software robot to produce an automation result and detecting completion of the processing by the first software robot ([0097] obtaining a result from the RPA).
In addition, Balakrishnan as cited above for claim 5 teaches the non-transitory computer readable medium and instructions for performing the method.
Regarding claim 10, Bond teaches presenting another user interface including at least a portion of the automation result and detecting receipt of an input with respect to the another user interface ([0083] The updated virtualisation, or just the resultant changes, would then typically be transmitted back to the client terminal device 12 and changes caused by the user interaction with the graphical user interface would be displayed and further consequential user interaction may be captured.).
In addition, Balakrishnan as cited above for claim 5 teaches the non-transitory computer readable medium and instructions for performing the method.
Regarding claim 11, Bond teaches managing a workflow process using a software automation system ([0128] Referring firstly to the client terminal device implementation 12, the command input interface 24 receives remote process automation “process flows” from the RPA application 14 on the client terminal device 12. These process flows will typically comprise commands which are intended for execution on the remote desktop server session 22.), comprising:
providing input information to and requesting information from an external software application ([0082] receive commands from a Robotic Process Automation (RPA) application 14 on a client terminal device 12 and to cause these commands to be transmitted via a communications link 15 to and executed on a remote desktop server 16 from within a remote desktop server session. [0097] send the results of commands (results from the RPA execution), which have been transmitted from the command input interface 24 to the command delivery interface 26 (in the remote desktop server); [0136] Once the command delivery interface 26 receives the requests, the command delivery interface 26 then decrypts the received request and performs the requested action which may involve performing an API call, running a program, inputting data into an application or web browser, moving the mouse or any other mechanism required to fulfil the request.);
receiving output information from the external software application ([0158] In some embodiments, the results of the action are provided back to the command input interface 24 from which the request originates.);
determining a characteristic of the output information that impacts the ability of the output information to be successfully utilized as input information for use by the software automation system or the workflow process ([0097] Such results may comprise a notification that the executed command was successful or an error code if the command was not executed successfully (i.e., characteristic). In some cases, the initially transmitted command may generate an output (such as in the case where the command is a request for a value of a particular item in a spreadsheet). In these cases, the results transmitted back to the command input interface 24 comprise the requested output, in addition to a notification that the executed command was successful);
presenting a user interface able to receive user input from a human user and receiving, via the user interface, a first input from the human user that supplements or alters the output information from the external application ([0083] The updated virtualisation, or just the resultant changes, would then typically be transmitted back to the client terminal device 12 and changes caused by the user interaction with the graphical user interface would be displayed and further consequential user interaction may be captured.; [0159]).
In addition, Balakrishnan teaches a non-transitory computer readable medium including at least computer program code stored thereon for managing an automation workflow process ([0039] The processing element or elements may be programmed with a set of executable instructions (e.g., software instructions), where the instructions may be stored on (or in) a suitable non-transitory data storage element. In some embodiments, one or more of the operations, functions, processes, or methods described herein may be implemented by a specialized form of hardware, such as a programmable gate array, application specific integrated circuit (ASIC), or the like.; [0133] workflow for authenticating/verifying a document; [0007] the image and text processing described herein could be used with robotic-process-automation effort) and computer program code for performing the steps of the methods ([0039]); and
wherein the output information includes a confidence indicator associated with the processing performed by the first external application on the input information, and wherein a value of the confidence indicator being less than a predetermined threshold indicates that validation of the output information is required (Fig. 1(c) steps 145-147; [0088] FIG. 1(c) is a second flowchart or flow diagram illustrating an example process, operation, method, or function 130 for authenticating/verifying a document, in accordance with some embodiments of the system and methods described herein. These processing steps or stages may be described in further detail as follows: [0089] Receive or access an image of a subject document (as suggested by step or stage 132) [0090] as examples, the image may be a photograph, scan or generated by use of an OCR process; [0088-119]; [0091] Process the image of the subject document to identify and extract one or more invariable attributes of the subject document (step or stage 133); [0092] where the invariable attributes may include labels, titles, headers, field names, logos, holograms, seals, or similar features that can be recognized with confidence even if an image is skewed or distorted, and do not represent information or data specific to a person in possession of the document (such as data inserted into a field, a birth date, an address, etc.); [0102] Generating a score or metric reflecting a confidence level or accuracy of the identified attributes and/or the document type (i.e., a measure of the match or closeness of a match to a template) based on the transformation and extracted invariable attributes; [0103] Determining if the generated score satisfies (typically by exceeding) a threshold value or confidence level; [0105] If the generated score does not satisfy the threshold value, then re-evaluating the subject document (rescoring) using one or more of additional invariable attributes, inspection of the subject document by a person, or use of a different methodology to determine the correct document type).
Regarding claim 12, Bond teaches receiving, via the user interface, a second input comprising instructions to accept the output information as supplemented or altered by the first input as input information for use by the software automation system or the workflow process ([0083]).
In addition, Balakrishnan as cited above for claim 11 teaches the non-transitory computer readable medium and instructions for performing the method.
Regarding claim 13, it recites limitations similar to those of claim 4 above. Therefore, it is rejected under the same rationale.
Regarding claim 14, Balakrishnan teaches wherein the characteristic includes a textual identifier (Fig. 1 (c); [0089] Receive or access an image of a subject document (as suggested by step or stage 132) [0090] as examples, the image may be a photograph, scan or generated by use of an OCR process; [0091] Process the image of the subject document to identify and extract one or more invariable attributes of the subject document (step or stage 133); [0092] where the invariable attributes may include labels, titles, headers, field names, logos, holograms, seals, or similar features that can be recognized with confidence even if an image is skewed or distorted).
Regarding claim 15, Balakrishnan teaches wherein the external software application provides OCR data extraction from an image of a document ([0090] as examples, the image may be a photograph, scan or generated by use of an OCR process).
Regarding claim 16, Balakrishnan teaches wherein the external software application provides computer-determined labeling for a document ([0091] Process the image of the subject document to identify and extract one or more invariable attributes of the subject document (step or stage 133); [0092] where the invariable attributes may include labels, titles, headers, field names, logos, holograms, seals, or similar features that can be recognized with confidence even if an image is skewed or distorted, and do not represent information or data specific to a person in possession of the document (such as data inserted into a field, a birth date, an address, etc.)).
Regarding claim 20, Bond teaches managing an automation workflow process utilizing software robots, external applications and human input (Abstract: preparing a command for execution by the remote application, the command including a command identifier and being configured to require a response; transmitting the command for the remote application from the first system to the second system via the virtual channel; receiving the command at the second system and providing the command to the remote application seeking to execute the command; generating a response to the provision of the command to the remote application, the response including the command identifier; and sending the response to the first system via the virtual channel; [0158-159]: The received results may be used in order to generate further commands by either the RPA application 14 or the browser-based application 44.; [0032] transmitting user interaction (keyboard strokes, mouse movement, etc), at a local terminal (client terminal) to a remote location (central server) for updating a virtualisation of the operating environment at the central server, can also be used in a different manner to transmit commands for remote RPA operation.; [0083]; [0199] The workflow might look like: Connect to remote desktop server A. Connect to remote desktop server B. Connect to remote desktop server C. Perform tasks across remote desktop servers A, B, C depending on the overall requirement.), comprising:
identifying a first human task to be included in the automation workflow process being created ([0003] Many processes involved in day-to-day work life are now conducted digitally on local and networked computer systems. Included within these processes are mundane and repetitive tasks which typically a human is required to process, even when the nature of these tasks is reasonably straightforward. Examples of such processes can be found in simple data entry tasks, where no transformative action is required by the human user or moving files between different locations in a file system.; [0004] Tasks of this nature are suitable to be conducted by an automated robot worker (or virtual worker), since the tasks themselves follow a known routine, and the automated virtual worker can be programmed to perform these tasks reliably and accurately. Outsourcing of these tasks to an automated virtual worker typically enables these tasks to be performed more quickly than is possible when a user carries out the processes manually.);
configuring the first human task to present a user interface to a person and to capture a data input therefrom ([0083] Such user interactions may comprise of keyboard strokes, mouse movement, mouse clicks etc. Typically, a dedicated virtual channel would be set up for each different type of user interaction, such as one for keystrokes and a different one running in parallel for mouse movement. This user interaction data would typically be used to update a virtualisation of the operating environment at the remote desktop server session 22. The updated virtualisation, or just the resultant changes, would then typically be transmitted back to the client terminal device 12 and changes caused by the user interaction with the graphical user interface would be displayed and further consequential user interaction may be captured.);
identifying a first robotic task to be included ([0003-4]) in the automation workflow process being created ([0043] In some embodiments, an RPA script includes computer code configured to automatically execute a process that may have traditionally been performed by a human operator (e.g., processing invoices, preparing periodic reports, granting mortgage applications, etc.). It can be beneficial to use an RPA script for a number of reasons. For example, RPA scripts may execute the same process quicker and more accurately than a human operator. In some embodiments, RPA scripts may be used in applications in which the avoidance of conscious and/or subconscious human bias from process execution is critical (e.g., because an RPA script may be configured to not consider prohibited factors in its decision-making). [0051] In some embodiments, the computer script may be manually created by a developer.);
arranging the first robotic task to follow after the first human task within the automation workflow process being created ([0128] Referring firstly to the client terminal device implementation 12, the command input interface 24 receives remote process automation “process flows” from the RPA application 14 on the client terminal device 12.; [0129]);
configuring the first robotic task to utilize a first software robot, and to receive as an input at least a portion of the data input that the first human task provided ([0032] The present invention resides in the appreciation that the virtual channel, which is used in remote desktop technologies for transmitting user interaction (keyboard strokes, mouse movement, etc), at a local terminal (client terminal) to a remote location (central server) for updating a virtualisation of the operating environment at the central server, can also be used in a different manner to transmit commands for remote RPA operation.; [0083] Such user interactions may comprise of keyboard strokes, mouse movement, mouse clicks etc. Typically, a dedicated virtual channel would be set up for each different type of user interaction, such as one for keystrokes and a different one running in parallel for mouse movement. This user interaction data would typically be used to update a virtualisation of the operating environment at the remote desktop server session 22. The updated virtualisation, or just the resultant changes, would then typically be transmitted back to the client terminal device 12 and changes caused by the user interaction with the graphical user interface would be displayed and further consequential user interaction may be captured. As is discussed above, when remote process automation is used over a remote desktop session in this manner it is typically necessary to use the above-described Surface Automation process.; [0097]);
identifying a first external application to be accessed during the automation workflow process being created ([0082] receive commands from a Robotic Process Automation (RPA) application 14 on a client terminal device 12 and to cause these commands to be transmitted via a communications link 15 to and executed on a remote desktop server 16 from within a remote desktop server session.; [0086] By way of example, the commands sent may be used in the following automated scenarios: [0087] a) Microsoft Office automation (for office-related applications such as Excel, Word, PowerPoint and Outlook); [0097]);
arranging the first external application to be accessed following after the first human task or the first robotic task within the automation workflow process and configuring the automation workflow process to receive data from the first external application being accessed ([0097] In addition, the automated command system 10 is further configured to utilise the virtual channel 18 to send the results of commands, which have been transmitted from the command input interface 24 to the command delivery interface 26, back to the command interface 24 and ultimately back to the remote process automation application 14. Such results may comprise a notification that the executed command was successful or an error code if the command was not executed successfully. In some cases, the initially transmitted command may generate an output (such as in the case where the command is a request for a value of a particular item in a spreadsheet). In these cases, the results transmitted back to the command input interface 24 comprise the requested output, in addition to a notification that the executed command was successful.).
In addition, Balakrishnan teaches a non-transitory computer readable medium including at least computer program code stored thereon for managing an automation workflow process ([0039] The processing element or elements may be programmed with a set of executable instructions (e.g., software instructions), where the instructions may be stored on (or in) a suitable non-transitory data storage element. In some embodiments, one or more of the operations, functions, processes, or methods described herein may be implemented by a specialized form of hardware, such as a programmable gate array, application specific integrated circuit (ASIC), or the like.; [0133] workflow for authenticating/verifying a document; [0007] the image and text processing described herein could be used with robotic-process-automation effort) and computer program code for performing the steps of the methods ([0039]);
wherein the received data includes a confidence indicator associated with a processing performed by the first external application, and wherein a value of the confidence indicator being less than a predetermined threshold indicates that validation of the received data is required (Fig. 1c steps 145-147; [0007]; [0088] FIG. 1(c) is a second flowchart or flow diagram illustrating an example process, operation, method, or function 130 for authenticating/verifying a document, in accordance with some embodiments of the system and methods described herein. These processing steps or stages may be described in further detail as follows: [0089] Receive or access an image of a subject document (as suggested by step or stage 132) [0090] as examples, the image may be a photograph, scan or generated by use of an OCR process; [0088-119]; [0091] Process the image of the subject document to identify and extract one or more invariable attributes of the subject document (step or stage 133); [0092] where the invariable attributes may include labels, titles, headers, field names, logos, holograms, seals, or similar features that can be recognized with confidence even if an image is skewed or distorted, and do not represent information or data specific to a person in possession of the document (such as data inserted into a field, a birth date, an address, etc.); [0102] Generating a score or metric reflecting a confidence level or accuracy of the identified attributes and/or the document type (i.e., a measure of the match or closeness of a match to a template) based on the transformation and extracted invariable attributes; [0103] Determining if the generated score satisfies (typically by exceeding) a threshold value or confidence level; [0105] If the generated score does not satisfy the threshold value, then re-evaluating the subject document (rescoring) using one or more of additional invariable attributes, inspection of the subject document by a person, or use of a different methodology to determine the correct document type).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Balakrishnan of implementing the methods in a non-transitory computer readable medium and of providing a confidence indicator for processed data with the method as taught by Bond. The modification would have been motivated by the desire of ensuring the accuracy of the received data by requiring validation when the confidence indicator falls below a predetermined threshold.
Regarding claim 23, Balakrishnan teaches wherein the computer readable medium comprises: computer program code for supporting a validation user interface, the validation user interface facilitating a user in validating data acquired by or for the first external application ([0105] If the generated score does not satisfy the threshold value, then re-evaluating the subject document (rescoring) using one or more of additional invariable attributes, inspection of the subject document by a person, or use of a different methodology to determine the correct document type).
Regarding claim 29, it is a method claim having limitations similar to those of claim 11. Therefore, it is rejected under the same rationale set forth above.
Claims 17-19, 21-22, and 27-28 are rejected under 35 U.S.C. 103 as being unpatentable over Bond et al. (US 2022/0326974 A1) in view of Balakrishnan et al. (US 2021/0124919 A1), and further in view of Foth et al. (US 2021/0288823 A1).
Regarding claim 17, Bond teaches a robotic process automation system ([0081] RPA systems), comprising:
the software robots providing automated interaction with one or more software programs operating on one or more computing devices ([0004] Tasks of this nature are suitable to be conducted by an automated robot worker (or virtual worker), since the tasks themselves follow a known routine, and the automated virtual worker can be programmed to perform these tasks reliably and accurately. Outsourcing of these tasks to an automated virtual worker typically enables these tasks to be performed more quickly than is possible when a user carries out the processes manually.; [0005] The assignment of these tasks to automated virtual workers is often classed as Remote Process Automation (RPA).; [0086] By way of example, the commands sent may be used in the following automated scenarios: [0087] a) Microsoft Office automation (for office-related applications such as Excel, Word, PowerPoint and Outlook); [0088] b) Web browser automation (for internet browsers such as Chrome and Internet Explorer/Edge); [0089] c) Launch applications (which help to locate and start other computer programs); [0090] d) Interact with applications using the Microsoft Win32 and UIA APIs (e.g. enter text, press buttons, click menus) which in turn illicit a response in the application.);
wherein at least a particular automation workflow process of the created automation workflow processes includes a determined sequence of performing a plurality of tasks, at least one of the tasks in the determined sequence being a robotic task that is performed by one of the software robots, at least another of the tasks in the determined sequence being a human task that is performed to receive interaction with a person, and at least another of the tasks in the determined sequence being an interaction with a software application ([0083] The updated virtualisation, or just the resultant changes, would then typically be transmitted back to the client terminal device 12 and changes caused by the user interaction with the graphical user interface would be displayed and further consequential user interaction may be captured; [0097] In addition, the automated command system 10 is further configured to utilise the virtual channel 18 to send the results of commands, which have been transmitted from the command input interface 24 to the command delivery interface 26, back to the command interface 24 and ultimately back to the remote process automation application 14. Such results may comprise a notification that the executed command was successful or an error code if the command was not executed successfully. In some cases, the initially transmitted command may generate an output (such as in the case where the command is a request for a value of a particular item in a spreadsheet). In these cases, the results transmitted back to the command input interface 24 comprise the requested output, in addition to a notification that the executed command was successful. [0128] Referring firstly to the client terminal device implementation 12, the command input interface 24 receives remote process automation “process flows” from the RPA application 14 on the client terminal device 12. 
These process flows will typically comprise commands which are intended for execution on the remote desktop server session 22. The commands will typically be received by the command input interface 24 from the RPA application 14 using a Simple Object Access Protocol (SOAP) API interface (134—see FIG. 5). Whilst a SOAP API interface is specified, it is to be appreciated that any appropriate protocol which enables the system 10, 40 to achieve its desired functionality may be used. Once the commands have been received using a suitable protocol language, these commands are sent to the command delivery interface 26, typically using the external communications network 28.; [0129] In some unseen embodiments, the RPA application 14 may also be configured to provide automated commands to be run on applications which are installed on the client terminal device 12. In this regard, the command input interface 24 may be configured to act as a switch which in one configuration enables automated process orchestration to occur on the remotely located computer system but in another configuration does not access the remote session virtual channel but rather acts on applications which are now found on the local computer system environment.), and
wherein performance of the particular automation workflow process performs the tasks of the particular automation workflow process in the determined sequence, the performance including causing the one of the software robots for the robotic task to be performed ([0097] “results of commands” are results of the RPA application), causing a user interface to be presented to the person in performing the human task ([0083] The updated virtualisation, or just the resultant changes, would then typically be transmitted back to the client terminal device 12 and changes caused by the user interaction with the graphical user interface would be displayed and further consequential user interaction may be captured.), and causing interaction with the software application to provide data to the software application and received returned data from the software application ([0097] In addition, the automated command system 10 is further configured to utilise the virtual channel 18 to send the results of commands, which have been transmitted from the command input interface 24 to the command delivery interface 26, back to the command interface 24 and ultimately back to the remote process automation application 14. Such results may comprise a notification that the executed command was successful or an error code if the command was not executed successfully. In some cases, the initially transmitted command may generate an output (such as in the case where the command is a request for a value of a particular item in a spreadsheet). In these cases, the results transmitted back to the command input interface 24 comprise the requested output, in addition to a notification that the executed command was successful.).
Bond does not expressly teach: a data store configured to store a plurality of software robots;
a workflow process platform configured to enable users to (i) create automation workflow processes;
(ii) perform automation workflow processes that have been created;
wherein the returned data includes a confidence indicator associated with processing of the data performed by the software application, and wherein a value of the confidence indicator being less than a predetermined threshold indicates that validation of the processed data is required.
However, Foth teaches a data store configured to store a plurality of software robots ([0051] a computer script (e.g., an RPA script); [0100] For example, two or more of nodes 452a-452c (e.g., via data repositories 456a-c) may store all or portions of tools (e.g., a computer script, a runtime utility, a log file, and/or dependent programs).), a workflow process platform configured to enable users to (i) create automation workflow processes ([0043] In some embodiments, an RPA script includes computer code configured to automatically execute a process that may have traditionally been performed by a human operator (e.g., processing invoices, preparing periodic reports, granting mortgage applications, etc.). It can be beneficial to use an RPA script for a number of reasons. For example, RPA scripts may execute the same process quicker and more accurately than a human operator. In some embodiments, RPA scripts may be used in applications in which the avoidance of conscious and/or subconscious human bias from process execution is critical (e.g., because an RPA script may be configured to not consider prohibited factors in its decision-making). [0051] In some embodiments, the computer script may be manually created by a developer.), and (ii) perform automation workflow processes that have been created ([0043] In some embodiments, an RPA script includes computer code configured to automatically execute a process that may have traditionally been performed by a human operator (e.g., processing invoices, preparing periodic reports, granting mortgage applications, etc.). It can be beneficial to use an RPA script for a number of reasons. For example, RPA scripts may execute the same process quicker and more accurately than a human operator.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Foth of storing a plurality of software robots, creating automation workflow processes, and performing the created automation workflow processes with the system as taught by Bond. The modification would have been motivated by the desire of allowing a user to automate and offload the processing of certain tasks to optimize processing times.
Neither Bond nor Foth expressly teaches wherein the returned data includes a confidence indicator associated with processing of the data performed by the software application, and wherein a value of the confidence indicator being less than a predetermined threshold indicates that validation of the processed data is required.
However, Balakrishnan teaches wherein the returned data includes a confidence indicator associated with processing of the data performed by the software application, and wherein a value of the confidence indicator being less than a predetermined threshold indicates that validation of the processed data is required (Fig. 1c steps 145-147; [0007]; [0088] FIG. 1(c) is a second flowchart or flow diagram illustrating an example process, operation, method, or function 130 for authenticating/verifying a document, in accordance with some embodiments of the system and methods described herein. These processing steps or stages may be described in further detail as follows: [0089] Receive or access an image of a subject document (as suggested by step or stage 132) [0090] as examples, the image may be a photograph, scan or generated by use of an OCR process; [0088-119]; [0091] Process the image of the subject document to identify and extract one or more invariable attributes of the subject document (step or stage 133); [0092] where the invariable attributes may include labels, titles, headers, field names, logos, holograms, seals, or similar features that can be recognized with confidence even if an image is skewed or distorted, and do not represent information or data specific to a person in possession of the document (such as data inserted into a field, a birth date, an address, etc.); [0102] Generating a score or metric reflecting a confidence level or accuracy of the identified attributes and/or the document type (i.e., a measure of the match or closeness of a match to a template) based on the transformation and extracted invariable attributes; [0103] Determining if the generated score satisfies (typically by exceeding) a threshold value or confidence level; [0105] If the generated score does not satisfy the threshold value, then re-evaluating the subject document (rescoring) using one or more of additional invariable attributes, inspection of the subject document by a person, or use of a different methodology to determine the correct document type).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Balakrishnan of providing a confidence indicator for processed data with the system as taught by Bond in view of Foth. The modification would have been motivated by the desire of ensuring the accuracy of data returned by the software application by requiring validation when the confidence indicator falls below a predetermined threshold.
Regarding claim 18, Bond teaches wherein the workflow process platform manages the performance of the particular automation workflow process by operating to at least:
determine a first task within the particular automation workflow process that is to be performed, cause the first task to be performed on a first computing device, receive an indication that the first task has completed, determine a subsequent task within the particular automation workflow process that is to be performed after the first task, cause the subsequent task to be performed on a second computing device, and receive an indication that the subsequent task has completed (Abstract; [0082-83]; [0097]).
Regarding claim 19, Bond teaches wherein, in creating the particular automation workflow process via the workflow process platform, the workflow process platform is configured to at least: identify a first task to be included in the automation workflow process being created, identify a second task to be included in the automation workflow process being created, and arrange the second task to follow after the first task within the automation workflow process ([0083]; [0097]; [0128-129] show a process flow in which tasks that are performed by the RPA are performed and subsequent tasks are outsourced via a private connection to a second/external service).
Regarding claim 21, Foth teaches wherein the computer program code for arranging the first external application to be accessed following after the first human task or the first robotic task within the automation workflow process, comprises:
computer program code for presenting a configuration user interface, the configuration user interface facilitating a user in specifying output data from the first robotic task that is to be provided as input data to the first external application ([0044] In some embodiments, an RPA script may be written by a developer to automate one or more particular processes… A process, sub-process, thread, and/or service may continually monitor actions taken by a user, actions taken by a program on behalf of a user, etc. For example, a process, sub-process, thread, and/or service may observe that a user opens an email received from “invoices@company.com” and downloads an attachment titled “April Invoice.” A process, sub-process, thread, and/or service may observe that a user copies a number next to text that reads “Amount Owed” and enters the number into a user interface for an internal database. A process, sub-process, thread, and/or service may then generate an RPA script based on observations of user activity. An RPA script may use APIs to interface directly with relevant applications, an RPA script may interface with applications indirectly through, for example, keyboard and mouse control, or any other interface methods may be used.; [0048] In some embodiments, an output executable and/or binary file may call one or more external programs that may be used in executing an RPA script. For example, an RPA script may rely on a word processing application to perform one or more steps during execution. An RPA script (or an executable and/or binary file based on an RPA script) may open the dependent application to execute its function.).
Regarding claim 22, Bond teaches wherein the first external application produces validated data, and wherein the validated data from the first external application is directed by the automation workflow process as input to another task within the automation workflow process ([0097] In addition, the automated command system 10 is further configured to utilise the virtual channel 18 to send the results of commands, which have been transmitted from the command input interface 24 to the command delivery interface 26, back to the command interface 24 and ultimately back to the remote process automation application 14. Such results may comprise a notification that the executed command was successful or an error code if the command was not executed successfully. In some cases, the initially transmitted command may generate an output (such as in the case where the command is a request for a value of a particular item in a spreadsheet). In these cases, the results transmitted back to the command input interface 24 comprise the requested output, in addition to a notification that the executed command was successful.).
Regarding claim 27, Foth teaches wherein the computer program code for arranging the first external application to be accessed following after the first human task or the first robotic task within the automation workflow process, comprises: computer program code for presenting a configuration user interface, the configuration user interface facilitating a user in arranging the first external application to be accessed following after the first human task or the first robotic task within the automation workflow process ([0044] In some embodiments, an RPA script may be written by a developer to automate one or more particular processes… A process, sub-process, thread, and/or service may continually monitor actions taken by a user, actions taken by a program on behalf of a user, etc. For example, a process, sub-process, thread, and/or service may observe that a user opens an email received from “invoices@company.com” and downloads an attachment titled “April Invoice.” A process, sub-process, thread, and/or service may observe that a user copies a number next to text that reads “Amount Owed” and enters the number into a user interface for an internal database. A process, sub-process, thread, and/or service may then generate an RPA script based on observations of user activity. An RPA script may use APIs to interface directly with relevant applications, an RPA script may interface with applications indirectly through, for example, keyboard and mouse control, or any other interface methods may be used.; [0048] In some embodiments, an output executable and/or binary file may call one or more external programs that may be used in executing an RPA script. For example, an RPA script may rely on a word processing application to perform one or more steps during execution. An RPA script (or an executable and/or binary file based on an RPA script) may open the dependent application to execute its function.).
Regarding claim 28, Foth teaches wherein the configuration user interface includes at least a first data input field to specify an element name, a second data input field to identify a task name, and a third data input field to specify a document identifier ([0044] In some embodiments, an RPA script may be written by a developer to automate one or more particular processes… A process, sub-process, thread, and/or service may continually monitor actions taken by a user, actions taken by a program on behalf of a user, etc. For example, a process, sub-process, thread, and/or service may observe that a user opens an email received from “invoices@company.com” and downloads an attachment titled “April Invoice.” A process, sub-process, thread, and/or service may observe that a user copies a number next to text that reads “Amount Owed” and enters the number into a user interface for an internal database. A process, sub-process, thread, and/or service may then generate an RPA script based on observations of user activity. An RPA script may use APIs to interface directly with relevant applications, an RPA script may interface with applications indirectly through, for example, keyboard and mouse control, or any other interface methods may be used.; [0048] In some embodiments, an output executable and/or binary file may call one or more external programs that may be used in executing an RPA script. For example, an RPA script may rely on a word processing application to perform one or more steps during execution. An RPA script (or an executable and/or binary file based on an RPA script) may open the dependent application to execute its function.).
Claims 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over Bond et al. (US 2022/0326974 A1) in view of Balakrishnan et al. (US 2021/0124919 A1), and further in view of Manohar (US 9,286,526 B1).
Regarding claim 24, neither Bond nor Balakrishnan teaches the limitations of claim 24; however, Manohar teaches wherein the validation user interface concurrently presents (i) at least one item of textual data of an image of a document in a visually distinguished manner (Col. 7, line 32 through Col. 8, line 12: Despite the advancements made in text recognition technology, errors may still exist in the processed works 106C generated by the architecture 100, and such errors must be manually corrected by the user 104 before a finalized work 106D may be produced. As a result, an example application of the present disclosure may provide one or more text images 118 of the processed work 106C for consideration by the user 104. Such text images 118 may include a plurality of edits made automatically by the application operable on the computing device 108. Some of the automatically-made edits may be accurate and, thus, accepted by the user 104 without further revision. Other automatically-made edits 120, on the other hand, may be of questionable accuracy, and such edits 120 may require manual correction by the user 104 via the user interface 114. For example, such edits 120 may include those forming a correctly-spelled word (e.g., a word passing through a spell check or other like correction engine), but having grammar, syntax, context, or other like issues requiring validation or manual correction from the user 104. As will be described in greater detail below, such edits 120 may be highlighted and/or otherwise identified to the user 104 as being of questionable accuracy such that the user 104 may provide either validation or a manual correction to the edit 120 in an efficient manner. It is understood that such manual corrections received from the user 104 may also comprise inputs 116 as described above.; Col. 12, lines 46-65), and (ii) extracted text corresponding to the at least one item of textual data, the extracted text being programmatically recognized from the image of the document (Col. 7, line 32 through Col. 8, line 12; Col. 12, lines 46-65: Fig. 4 shows extracted text “She placed the phone down on the table”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Manohar of validating data extracted from a document with the teachings of Bond and Balakrishnan of outsourcing jobs in an automated environment. The modification would have been motivated by the desire of ensuring extracted text is accurate.
Regarding claim 25, Manohar teaches wherein the validation user interface enables the user to validate the extracted data as correct for the at least one item of textual data of the image of the document (Col. 7, line 32 through Col. 8, line 12; Col. 12, lines 46-65).
Regarding claim 26, Manohar teaches wherein the validation user interface enables the user to correct the extracted data for the at least one item of textual data of the image of the document (Col. 7, line 32 through Col. 8, line 12; Col. 12, lines 46-65: In one example, upon reviewing the one or more automatically-made edits contained in the processed work 106C, the user 104 may notice that the recognition module 210 or the processing module 212 inserted the characters “cl” instead of the character “d,” and as a result, included the word “clown” instead of the word “down” into the processed work 106C. Such an exemplary edit 120 is sown in FIG. 1. The processing module 212 may identify this edit 120 as being of questionable accuracy, and upon review of the processed work 106C, the user 104 may manually replace the characters “cl” with the character “d.”.).
Response to Arguments
Applicant’s arguments with respect to claims 1-6, 8-12, and 14-29 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CHU JOY-DAVILA whose telephone number is (571)270-0692. The examiner can normally be reached Monday-Friday, 6:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J Li can be reached at (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JORGE A CHU JOY-DAVILA/Primary Examiner, Art Unit 2195