DETAILED ACTION
Claims 1-20 are pending in the current application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments, see Remarks, filed 11/21/25, with respect to the rejection of claim 1 under 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Pierce et al. (Pub. No. US 2024/0103852 A1) [0057] lines 1-3 and 9-20, [0058] lines 1-6, [0067] lines 4-22 and [0068] lines 1-11, which show that the generation of a project associated with automation can include receiving from the user design input information that can include a plurality of information such as control logic, structured text, sequential function charts, design diagram information and flow diagram information, viewed as a type of automation structure, as well as input and output/goal information provided/selected by the user to create the automation project. The user input can be received by the user interface in any suitable format, viewed as a type of multi-modal interface as it can receive visual, audio, tactile, etc. input, and the system is able to provide feedback, interactive contextual guidance, back to the developer in response to the selected/provided input information and use this information in project generation, seen as the generation/synthesizing of code associated with the automation project. The new ground of rejection is further made in view of Allen et al. (Pub. No. US 2009/0119587 A1) [0007] lines 1-4, [0016] lines 1-12, [0017] lines 1-13, [0052] lines 1-12, [0053] lines 1-6, [0054] lines 1-5, [0059] lines 1-19, [0060] lines 1-4, [0079] lines 1-10 and [0089] lines 1-8, which show that learning how to create automation is based on spoken instructions for the task, viewed as a type of task automation structure, that are modified/combined with demonstrated actions, viewed as teaching demonstrations that track/monitor and record actions associated with the automation task, viewed as a type of automation parameters, and the associated objects, including state/selection, that the user interacts with in the GUI. Allen is further able to provide feedback based on the associated intent to clarify intent, viewed as a type of contextual feedback, that is relevant/associated with the specific item/element/object and used to generate the automated task. The references together are thus viewed as showing synthesizing a UI task automation program from one or more teaching demonstrations.
Applicant's arguments filed 11/21/25 have been fully considered but they are not persuasive.
Applicant argues (Argument 1; Remarks pg. 8 lines 29-31) that the amendment to claim 4 overcomes the 112(b) rejection of the claim.
With respect to applicant's argument, examiner respectfully disagrees. As previously stated, it was the dependency chain for claim 4 that needed to be modified to include claim 2 for proper antecedent basis of the terms: the dependency chain for dependent claim 4 previously ran through claim 3, which depended on claim 1, but it is dependent claim 2 that provides the necessary antecedent basis for "the one or more additional teaching demonstrations." While the amendment to claim 4 to now depend on claim 2 does fix the antecedent basis for that term by removing its dependency from claim 3, it creates a new antecedent basis issue for the term "the at least one logical abstract representation." If claim 4 depended on claim 3 and claim 3 were amended to depend on claim 2, these antecedent basis issues would be fixed.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 11 and 16 recite in lines 10-14, 13-17 and 13-17, respectively, "generating interactive contextual guidance presented to the user for selecting a User Interface (UI) element of the one or more teachings demonstrations to record conditional execution of one or more automation actions or Boolean expressions on the UI elements based on states of one or more UI elements of the one or more teachings demonstrations," which is unclear as to what the generated interactive contextual guidance is for and as to which decision the states of one or more UI elements of the teaching demonstrations serve as the basis for. Is the guidance for selecting a UI element of a teaching demonstration, where the selected UI element is one that records conditional execution of one or more automation actions or Boolean expressions and the UI element is selected based on the states of one or more UI elements of the teaching demonstration? Or is the contextual guidance for a selected UI element, where the guidance is for providing conditional execution of one or more automation actions or Boolean expressions on the UI element and the guidance is generated/determined based on the states of one or more UI elements of the teaching demonstrations, as it is interpreted? Appropriate clarification is required.
Claims 2-10, 12-15 and 17-20 depend from claims 1, 11 and 16 above and do not correct the issue. Appropriate clarification is required.
Claim 4 additionally recites the limitation "the at least one logical abstract representation" in lines 2-3. There is insufficient antecedent basis for this limitation in the claim, as the claim now depends on claim 2; however, as stated above, it is claim 3 that provides the antecedent basis for this term. If claim 3 were amended to depend from dependent claim 2 and claim 4 depended from claim 3, the elements of the claim would have correct antecedent basis.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 9, 11-12 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Pierce et al. (Pub. No. US 2024/0103852 A1), in view of Allen et al. (Pub. No. US 2009/0119587 A1) and further in view of Lachenmayr et al. (Pub. No. US 2024/0025034 A1).
As to claims 1 and 16, Pierce discloses a method comprising: receiving an automation structure, and one or more inputs and outputs of the automation structure to create a task automation, wherein the automation structure and the one or more inputs and outputs are selected by a user to create the task automation (Pierce [0057] lines 1-3 and 9-20, [0058] lines 1-6, [0067] lines 4-22, [0068] lines 1-11; which show that the generation of a project associated with automation can include receiving from the user design input information that can include a plurality of information such as control logic, structured text, sequential function charts, design diagram information and flow diagram information, viewed as a type of automation structure, as well as input and output/goal information provided/selected by the user to create the automation project);
providing multi-modal interfaces to receive an input from the user (Pierce [0057] lines 1-3 and 9-20 and [0058] lines 1-6; the user input can be received by the user interface in any suitable format, viewed as a type of multi-modal interface as it can receive visual, audio, tactile, etc. input, and that input is used in creating the project including the automation);
generating interactive contextual guidance presented to the user (Pierce [0057] lines 1-3 and 9-20, [0058] lines 1-6, [0067] lines 4-22, [0068] lines 1-11 and [0073] lines 1-18; which show that as part of the generation of a project associated with automation the system can receive from the user design input information, viewed as a type of automation structure, and input and output/goal information provided/selected by the user to create the automation project, and based on the user-selected input can generate and provide feedback, interactive contextual guidance, back to the developer in response to the selected/provided input information to aid in development of the project);
synthesizing a UI task automation program from the one or more teaching demonstrations for task automation (Pierce [0057] lines 1-3 and 9-20, [0058] lines 1-6, [0067] lines 4-22, [0068] lines 1-11; which show being able to generate/synthesize project code associated with automation based on user-provided input and feedback associated with the automation task, where the specifics of generating a UI automation task using a teaching demonstration are seen in the teachings of Allen below, and thus together the references would be viewed as showing the specifics of synthesizing a UI task automation program from the one or more teaching demonstrations for task automation).
Pierce does not specifically disclose processing one or more teaching demonstrations to update the automation structure of the task automation, wherein the one or more teaching demonstrations identify and record automation processing parameters and operations of the one or more teaching demonstrations for the task automation; generating interactive contextual guidance presented to the user for selecting a User Interface (UI) element of the one or more teaching demonstrations to record conditional execution of one or more automation actions or Boolean expressions on the UI elements based on states of one or more UI elements of the one or more teaching demonstrations; or recording, based on the conditional execution of one or more of the automation actions or Boolean expressions on at least one UI element, the one or more teaching demonstrations.
However, Allen discloses processing one or more teaching demonstrations to update the automation structure of the task automation, wherein the one or more teaching demonstrations identify and record automation processing parameters and operations of the one or more teaching demonstrations for the task automation (Allen [0007] lines 1-4, [0016] lines 1-12, [0017] lines 1-13, [0052] lines 1-12, [0053] lines 1-6, [0054] lines 1-5, [0059] lines 1-19, [0060] lines 1-4 and [0079] lines 1-10; which show that learning how to create automation is based on spoken instructions for the task, viewed as a type of task automation structure, that are modified/combined with demonstrated actions, viewed as teaching demonstrations that are tracked/monitored, with the actions demonstrated for the automation task recorded. The information tracked/monitored/recorded includes the actions taken and what type of action was taken on the associated objects, viewed as a type of automation parameters and automated operations, as well as the associated objects themselves, including the state/selection/actuation of specific objects, that the user interacts with in the GUI);
generating interactive contextual guidance presented to the user for selecting a User Interface (UI) element of the one or more teaching demonstrations to record conditional execution of one or more automation actions or Boolean expressions on the UI elements based on states of one or more UI elements of the one or more teaching demonstrations (Allen [0007] lines 1-4, [0016] lines 1-12, [0017] lines 1-13, [0052] lines 1-12, [0053] lines 1-6, [0054] lines 1-5, [0060] lines 1-4, [0079] lines 1-10, [0089] lines 1-8; which show learning and generating an automation task based on different inputs, including specific demonstrations, teaching demonstrations, that are tracked/monitored, with the actions associated with the teaching demonstration of the automation task recorded. The system can monitor and identify the object the user interacts with and observe what they do, where the intent recognition module, based on the user interaction/selection of a specific object and the demonstrations, provides intent feedback, viewed as a type of interactive guidance feedback based on the state of the UI and the associated elements/objects of the teaching demonstration, such as the examples of prompting the user to put a title here or tracking the mouse for expected-output guidance to clarify an action that is part of the automation, where this information is all monitored/tracked and recorded. In light of the contextual feedback of Pierce above, which can provide specific actions/changes, the references together can be viewed as disclosing generating interactive contextual guidance presented to the user for selecting a User Interface (UI) element of the one or more teaching demonstrations to record conditional execution of one or more automation actions or Boolean expressions on the UI elements based on states of one or more UI elements of the one or more teaching demonstrations);
recording, based on the conditional execution of one or more of the automation actions or Boolean expressions on at least one UI element, the one or more teaching demonstrations (Allen [0007] lines 1-4, [0016] lines 1-12, [0017] lines 1-13, [0052] lines 1-12 and [0053] lines 1-6; which show that the demonstrated user's actions, conditional execution of one or more automation actions as part of the teaching demonstration associated with the UI, are tracked/recorded).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Allen, showing the specifics of providing teaching demonstrations for generating an automation task, into the generation of automation tasks of Pierce for the purpose of increasing the adaptability of the system through the ability to learn and create automation based on the determined user's intent, as taught by Allen [0016] lines 9-17 and [0017] lines 1-13.
Pierce as modified by Allen does not specifically disclose presenting a visual program representation of the UI task automation program to the user for validation.
However, Lachenmayr discloses presenting a visual program representation of the UI task automation program to the user for validation (Lachenmayr [0005] lines 1-5, [0134] lines 1-8; which shows being able to visually check and test the resulting program, viewed as a type of RPA automation program/UI task automation program).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Lachenmayr, showing the specifics of providing a visual representation of a generated robot program associated with automation, into the automation generation of Pierce as modified by Allen for the purpose of increasing consistency by being able to perform verification checks on the generated automation and thus determine if there are issues with the generated automation before it is used, as taught by Lachenmayr [0134] lines 1-8.
As to claims 2 and 17, Pierce discloses generating user guidance and providing the multi-modal interfaces to process one or more additional teaching demonstrations (Pierce [0057] lines 1-3 and 9-20, [0058] lines 1-6, [0067] lines 4-22, [0068] lines 1-11; which show the multi-modal interface that can process user input in any suitable format and, based on the provided project design input information, can provide/generate guidance/suggestions. In light of the teaching demonstrations of Allen above, seen as a plurality and not limited to one, the references together disclose the specifics of generating user guidance and providing the multi-modal interfaces to process one or more additional teaching demonstrations).
Pierce as modified by Allen does not specifically disclose further comprising: analyzing the UI task automation program.
However, Lachenmayr discloses further comprising: analyzing the UI task automation program (Lachenmayr [0005] lines 1-5, [0134] lines 1-8; which shows being able to visually check, viewed as a type of analysis, the generated robot/automation program, which can be a GUI automation program).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Lachenmayr, showing the specifics of providing a visual representation of a generated robot program associated with automation, into the automation generation of Pierce as modified by Allen for the purpose of increasing consistency by being able to perform verification checks on the generated automation and thus determine if there are issues with the generated automation before it is used, as taught by Lachenmayr [0134] lines 1-8.
As to claim 9, Pierce does not specifically disclose, however, Allen discloses, wherein recording, based on the conditional execution of the one or more automation actions or Boolean expressions on at least one UI element, the one or more teaching demonstrations further comprises automatically detecting at least one program parameter from the one or more teaching demonstrations, and recording the at least one program parameter with the one or more teaching demonstrations (Allen [0007] lines 1-4, [0016] lines 1-12, [0017] lines 1-13, [0020] lines 1-11, [0052] lines 1-12 and [0053] lines 1-6; which show that the demonstrated user's actions, conditional execution of one or more automation actions as part of the teaching demonstration associated with the UI, are tracked/recorded, and that as part of the user actions/teaching demonstrations the system is able to determine a task input parameter/program parameter).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Allen, showing the specifics of providing teaching demonstrations for generating an automation task, into the generation of automation tasks of Pierce for the purpose of increasing the adaptability of the system through the ability to learn and create automation based on the determined user's intent, as taught by Allen [0016] lines 9-17 and [0017] lines 1-13.
As to claim 11, Pierce discloses a system, comprising: one or more computer processors (Pierce [0056] lines 1-6); and
a memory containing a program which, when executed by the one or more computer processors, performs an operation, the operation comprising (Pierce [0056] lines 1-15).
The remaining elements of the claim are comparable to claim 1 above and rejected under the same reasoning.
As to claim 12, it is comparable to claim 2 above and rejected under the same reasoning.
Claims 3, 7, 10, 13-15 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pierce, Allen and Lachenmayr as applied to claims 1, 11 and 16 above, and further in view of Riva et al. (Etna: Harvesting Action Graphs from Websites).
As to claims 3, 13 and 18, Pierce as modified by Allen and Lachenmayr does not specifically disclose further comprising: translating the one or more teaching demonstrations into at least one logical abstract representation of the task automation; and wherein presenting the visual program representation of the UI task automation program is based on the at least one logical abstract representation.
However, Riva discloses further comprising: translating the one or more teaching demonstrations into at least one logical abstract representation of the task automation; and wherein presenting the visual program representation of the UI task automation program is based on the at least one logical abstract representation (Riva pg. 314 Section 2.1 para 1 lines 1-6 and pg. 316 Fig. 2 lines 1-6; which show the converting/translation of the demonstrations into traces that are converted into an action state model visualized as an action graph, viewed as a logical abstract representation of the demonstrations that is visualized as an action graph and can be associated with a UI automation task).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Riva showing the specifics of generating a model abstract representation associated with an automation task, into the automation generation of Pierce as modified by Allen and Lachenmayr for the purpose of being able to design a more robust automation, as taught by Riva pg. 314 Section 2.1 para 2 lines 8-15.
As to claims 7, 15 and 20, Pierce as modified by Allen and Lachenmayr does not specifically disclose, however, Riva discloses, wherein presenting the visual program representation of the UI task automation program further comprises providing at least one interactive tool with the visual program representation to enable user understanding and validating the UI task automation program (Riva pg. 314 Section 2.1 para 1 lines 1-6, pg. 316 Fig. 2 lines 1-6 and pg. 320 Fig. 8 lines 1-6; which show that the display of the action graph, viewed as a visual presentation of the associated automation, includes a graph edition tool/window that allows for selection of an element of the representation to provide additional details for that element, viewed as enabling user understanding of that element. In light of the teachings of Lachenmayr above, showing that the visual presentation of the automation allows for checking/validation of the generated automation, the references together would be viewed as showing wherein presenting the visual program representation of the UI task automation program further comprises providing at least one interactive tool with the visual program representation to enable user understanding and validating the UI task automation program).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Riva showing the specifics of generating a model abstract representation associated with an automation task, into the automation generation of Pierce as modified by Allen and Lachenmayr for the purpose of being able to design a more robust automation, as taught by Riva pg. 314 Section 2.1 para 2 lines 8-15.
As to claims 10, 14 and 19, Pierce as modified by Allen and Lachenmayr does not specifically disclose, however, Riva discloses, wherein presenting the visual program representation of the UI task automation program further comprises performing the UI task automation program to display a sequence of UI operations and screens to validate behavior of the UI task automation program (Riva pg. 313 Col. 2 lines 9-19, pg. 314 Section 2.1 para. 1 lines 1-6, pg. 316 Fig. 2 lines 1-6 and pg. 324 Col. 1 lines 4-10; which show that the visual presentation of the automation is a model action graph used for validation by capturing a current UI snapshot and comparing it to model state information for snapshot state similarity, viewed as a type of displaying a sequence of UI operations and screens/shots to validate the behavior of the UI task automation program).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Riva showing the specifics of generating a model abstract representation associated with an automation task, into the automation generation of Pierce as modified by Allen and Lachenmayr for the purpose of being able to design a more robust automation, as taught by Riva pg. 314 Section 2.1 para 2 lines 8-15.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Pierce, Allen and Lachenmayr as applied to claim 2 above, and further in view of Riva et al. (Etna: Harvesting Action Graphs from Websites).
As to claim 4, Pierce as modified by Allen and Lachenmayr does not specifically disclose wherein translating the one or more additional teaching demonstrations into the at least one logical abstract representation of the task automation further comprises combining multiple automation actions into one logical action in the at least one logical abstract representation of the task automation to present for user understanding.
However, Riva discloses wherein translating the one or more additional teaching demonstrations into the at least one logical abstract representation of the task automation further comprises combining multiple automation actions into one logical action in the at least one logical abstract representation of the task automation to present for user understanding (Riva pg. 313 Col. 2 lines 9-19, pg. 314 Section 2.1 para. 1 lines 1-6 and pg. 316 Fig. 2 lines 1-6; which show that the generated model/logical abstraction, through converting/translating the task into an action state model and action graph of the automation, can merge a plurality of states into a single state in the model/abstract representation of the automation).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Riva showing the specifics of generating a model abstract representation associated with an automation task, into the automation generation of Pierce as modified by Allen and Lachenmayr for the purpose of being able to design a more robust automation, as taught by Riva pg. 314 Section 2.1 para 2 lines 8-15.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Pierce, Allen and Lachenmayr as applied to claim 1 above, and further in view of Khaladkar et al. (Pub. No. US 2009/0077422 A1).
As to claim 5, Pierce as modified by Allen and Lachenmayr does not specifically disclose wherein generating the interactive contextual guidance further comprises generating the interactive contextual guidance to enable the user to select at least one automation action or expression on at least one UI element.
However, Khaladkar discloses wherein generating the interactive contextual guidance further comprises generating the interactive contextual guidance to enable the user to select at least one automation action or expression on at least one UI element (Khaladkar [0018] lines 3-17; which shows being able to provide for user selection of a UI element and then, based on the selected UI element, selection of the operation to be performed on the UI element. In light of the teachings of Pierce and Allen above, showing the ability to generate interactive contextual guidance for an operation associated with UI objects and to determine/track/record the state of, and action taken on, the object as part of the automation, the references together can be viewed as showing generating the interactive contextual guidance further comprises generating the interactive contextual guidance to enable the user to select at least one automation action or expression on at least one UI element).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Khaladkar, showing the specifics of providing guidance to select a specific action to take on a UI element, into the interactive automation generation system of Pierce as modified by Allen and Lachenmayr, for the purpose of reducing the tediousness and complexity of generating possible tests for use in testing automation by providing the user with selections to choose from, as taught by Khaladkar [0003] lines 5-15.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Pierce, Allen and Lachenmayr as applied to claim 1 above, and further in view of Senthamaraikannan (Pub. No. US 2023/0123538 A1).
As to claim 6, Pierce as modified by Allen and Lachenmayr does not specifically disclose wherein receiving the automation structure further comprises prompting, and presenting graphical visual metaphors, to receive user selected definitions for the automation structure, and the one or more inputs and outputs of the automation structure to create the task automation.
However, Senthamaraikannan discloses wherein receiving the automation structure further comprises prompting, and presenting graphical visual metaphors, to receive user selected definitions for the automation structure, and the one or more inputs and outputs of the automation structure to create the task automation (Senthamaraikannan [0009] lines 4-18, [0089] lines 1-3, [009] lines 1-5 and [0094] lines 1-5; which show providing/prompting a user to select a template for the automation to use, viewed as a type of presenting a graphical visual metaphor for the automation, where the specifics of the automation design are seen in the teachings of Pierce above).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Senthamaraikannan showing the specifics of generating automation based on provided automation templates, into the generated automation of Pierce as modified by Allen and Lachenmayr for the purpose of reducing difficulties in designing automation by managing specific parameters associated with the automation, as taught by Senthamaraikannan [0006] lines 1-10 and [0009] lines 4-18.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Pierce, Allen and Lachenmayr as applied to claim 1 above, and further in view of Mutagi et al. (Patent No. US 11,100,922 B1).
As to claim 8, Pierce as modified by Allen and Lachenmayr does not specifically disclose wherein presenting the visual program representation of the UI task automation program further comprises enabling a user to accept and store the UI task automation program.
However, Mutagi discloses wherein presenting the visual program representation of the UI task automation program further comprises enabling a user to accept and store the UI task automation program (Mutagi Col. 34 lines 23-30 and line 48 - Col. 35 line 13, Col. 37 lines 3-13 and Fig. 3n; which show, for an automation, being able to store the automation, display a preview visualization of the generated automation, and provide an option for the user to confirm/accept the automation, which in light of the teachings of Pierce and Allen above can be viewed as showing the specifics of the generated UI task automation).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Mutagi, showing the specifics of storing and providing user confirmation for a generated automation, into the generated automation of Pierce as modified by Allen and Lachenmayr for the purpose of increasing user control over the generated automation so as to help make sure the generated automation is what the user desired, as taught by Mutagi Col. 34 lines 23-30 and line 48 - Col. 35 line 13 and Col. 37 lines 3-13.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRADFORD F WHEATON whose telephone number is (571)270-1779. The examiner can normally be reached Monday-Friday 8:00-5:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do can be reached at 571-272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRADFORD F WHEATON/Examiner, Art Unit 2193