Prosecution Insights
Last updated: April 19, 2026
Application No. 18/193,809

Method for Controlling a Virtual Assistant for an Industrial Plant

Status: Non-Final Office Action — §103
Filed: Mar 31, 2023
Examiner: OPSASNICK, MICHAEL N
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: ABB Schweiz AG
OA Round: 3 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 82% — above average (737 granted / 900 resolved; +19.9% vs TC avg)
Interview Lift: +10.5% across resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline)
Career History: 946 total applications across all art units (46 currently pending)
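The headline figures can be reproduced from the raw counts shown above. A minimal sketch follows; treating the interview lift as a simple additive adjustment in percentage points is an assumption about how the dashboard combines the two numbers:

```python
# Reproduce the dashboard's headline figures from the raw counts.
# Assumption: interview lift is applied additively, in percentage points.
granted = 737
resolved = 900

allow_rate = granted / resolved               # 0.8188... -> shown as 82%
interview_lift = 0.105                        # +10.5 percentage points
with_interview = allow_rate + interview_lift  # 0.9238... -> shown as 92%

print(f"Career allow rate: {allow_rate:.0%}")      # Career allow rate: 82%
print(f"With interview:    {with_interview:.0%}")  # With interview:    92%
```

Under that assumption, 737 / 900 ≈ 81.9% rounds to the displayed 82%, and 81.9% + 10.5% ≈ 92.4% rounds to the displayed 92%.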

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 33.0% (-7.0% vs TC avg)
§102: 29.9% (-10.1% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)

Deltas are relative to a Tech Center average estimate • Based on career data from 900 resolved cases
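The per-statute deltas are internally consistent: each one backs out to the same implied Tech Center average. A quick sanity check (rates in percent, deltas in percentage points):

```python
# Recover the implied Tech Center average from each statute's rate and delta.
examiner_rate = {"101": 17.7, "103": 33.0, "102": 29.9, "112": 6.3}
vs_tc_delta   = {"101": -22.3, "103": -7.0, "102": -10.1, "112": -33.7}

implied_tc_avg = {s: round(examiner_rate[s] - vs_tc_delta[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

All four statutes imply the same 40.0% baseline, suggesting the dashboard compares each statute against a single career-level Tech Center average rather than statute-specific ones.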

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/10/2025 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 2, 4-6, and 8-14 are rejected under 35 U.S.C. 103 as being unpatentable over Bagley Jr et al (20200026757) in view of Dhakshinamoorthy et al (20190107827), in further view of Pillai et al (20170039036).

As per claim 1, Bagley Jr teaches a method for controlling a virtual assistant for an industrial plant (as, using an intelligent industrial assistant – para 0005, 0006), comprising:

receiving, by an input interface, an information request, wherein the information request comprises at least one request for receiving information about at least part of the industrial plant (as, processing natural language input for context/information, or accessing databases, regarding the industrial machine – para 0091; see also para 0101: "feature may be associated with at least one operation for a user (e.g., machine operator and/or the like) to interact with intelligent industrial assistant 102. For example, a feature may include a direction, e.g., a request (e.g., command, question, and/or the like) for intelligent industrial assistant 102 to perform an action. Additionally or alternatively, a feature may include a confirmation, e.g., an input (e.g., spoken/verbal input, click, key stroke, and/or the like) to intelligent industrial assistant 102 confirming that intelligent industrial assistant 102 should perform an action (e.g., 'yes,' 'no,' 'cancel,' and/or the like). Additionally or alternatively, a feature may be a complex feature, e.g., a series of decision steps in which a user (e.g., machine operator and/or the like) provides multiple inputs (e.g., directions, confirmations, and/or the like) to intelligent industrial assistant");

determining by a control unit a model specification using the received information request; providing the model specification to a model manager; determining by the model manager a [...] ("For example, parameter data may be associated with a part number, a part identification, a machine number (e.g., of a particular industrial machine 104, a model of industrial machine 104, and/or the like), a machine identifier, a number, a category (e.g., low, medium, high, slow, fast, on, off, and/or the like), a word (e.g., name of a part, a machine, a database, an item of media, and/or the like), an alphanumeric string, and/or the like."; which is controlled by the intelligent industrial assistant – para 0101 – wherein the model identifier is an input to the intelligent industrial assistant – see para 0092; see also para 0091, which includes dialog control, task flow control, and choosing which output; along with managing the dialogue templates – para 0092); and

providing by the control unit a response to the information request using the determined machine learning model (as, returning the requested information – from para 0101: "'Warm up the machine,' and/or the like may be alternative items of expected dialogue to initiate a warm-up process for an industrial machine 104; 'Run process 1234,' 'Start m-code 1234,' and/or the like may be alternative items of expected dialogue to initiate a process associated with the stated code; 'Report current tool,' 'Inform about current tool,' and/or the like may be alternative items of expected dialogue to request information on a current tool; 'Turn on the lights,' 'Lights on,' and/or the like may be alternative items of expected dialogue to request turning on the lights; and/or the like. In some non-limiting embodiments, expected dialogue data may include initiating dialogue data associated with at least one natural language input (e.g., phrase and/or the like) for initiating the sequence associated with the expected dialogue data.").

As per claim 1, Bagley Jr also teaches checking, using the model specification, whether a suitable [...] (Bagley Jr, wherein the assistant accesses databases storing said information – para 0097, accessing stored information, which can be from a third-party vendor – para 0097; and para 0098: "..may store information and/or software related to the operation and use…"). Bagley Jr further teaches an intelligent industrial assistant (para 0006) incorporating dialogue templates trained and updated (para 0028, 0032), to be interfaced with the machine information via API calls and DLLs (para 0066), to control the machines (para 0092, 0093), using existing remote databases (end of para 0093); however, Bagley Jr does not explicitly teach the use of machine learning models as part of the intelligent industrial assistant.

Dhakshinamoorthy teaches the use of machine learning in a networked automation environment in an IoT structure (para 0022). Therefore, it would have been obvious to one of ordinary skill in the art of networked digital assistant technologies for industrial machines to enhance the system of Bagley Jr with machine learning, because it would advantageously improve the quality control and efficiency of the industrial plant (Dhakshinamoorthy, end of para 0022).
The combination of Bagley Jr in view of Dhakshinamoorthy teaches the mapping of the user request to the specifications of the product (see the mapping above, to Bagley Jr), and the implementation of machine learning models to process this information (see Dhakshinamoorthy, as mapped above). The combination thus teaches the use of machine learning models for the implementation. As to the claim features directed toward checking whether the model has suitable model parameters related to the technical requirements, Pillai teaches the matching of text analytics of the user input to the subject and features of the product specification model (para 0036). Therefore, it would have been obvious to one of ordinary skill in the art of model parameter matching to modify the learning models of Bagley Jr in view of Dhakshinamoorthy with matching of text analytics model parameters with model parameters of the product features/specifications, as taught by Pillai, because it would advantageously produce more accurate matches between the user input and the available instruments that could perform the user's needs (Pillai, para 0024).

As per claim 2, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 1, further comprising: identifying by the control unit an information intent using the received information request; and determining the model specification using the information intent (Bagley Jr, identifying information intent by a particular machine – see para 0091: "inferred intent in the context of the industrial machine", and the assistant returning a proper response in the determined intended task flow).
As per claim 4, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 1, wherein, when it is determined that a suitable machine learning model is stored in the model database, the method further comprises determining the response to the information request by using the stored machine learning model (Bagley Jr, para 0098, accessing databases for the model – first 11 lines, reflecting back on para 0097, last 12 lines).

As per claim 5, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 1, wherein determining the response to the information request comprises: providing model input by the control unit, wherein the model input is determined by using the information intent; and determining the response by inputting the model input into the machine learning model (Dhakshinamoorthy teaches deriving intent/content based on semantic analysis – para 0029).

As per claim 6, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 1, wherein, when it is determined that no suitable machine learning model is stored in the model database, the method further comprises providing a delay response to a user (Dhakshinamoorthy teaches providing real-time information when possible – para 0063, 0064 – but then a delay of a timeout when the information is not forthcoming – para 0070, 0077).
As per claim 8, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 1, wherein identifying an information intent comprises translating the information request into a machine understandable format (Bagley Jr, para 0017, taking user natural language input and then translating it to machine understandable commands – para 0019).

As per claim 9, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 1, wherein providing a response comprises translating the determined response into a user understandable format (Bagley Jr, displaying to the user in a user understandable format – para 0100: "such content may be displayed (e.g., in a dialogue window, in a separate window on the display screen, and/or the like). In some non-limiting embodiments, number values with multiple digits following a decimal point may be rounded to a selected (e.g., predetermined, selectable, and/or the like) number of digits after the decimal before being output (e.g., as audible speech, text in a dialogue window, text on an HTML page, and/or the like). In some non-limiting embodiments, when content includes at least one media item, such media item may be displayed in a separate window (e.g., on a display screen and/or the like). Additionally or alternatively, large media items (e.g., greater than a threshold number of pages (e.g., for documents), seconds (e.g., for audio, visual, or audiovisual files), and/or the like) may be divided (e.g., segmented and/or the like) into smaller media items, which may reduce load times. Additionally or alternatively, such smaller media items may be displayed serially, concurrently, and/or the like.").
As per claim 10, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 1, further comprising: determining, by a session manager, user action information relating to tracked actions and context information of an individual user (Bagley Jr, as a dialogue manager – first 9 lines, reflecting back on para 0068); determining, by the session manager, individual user information using the received user action information; requesting, by a state manager, service landscape information based on the individual user information (as, the application manager determining and gathering the necessary modules to develop a package for the user – para 0099); determining, by the state manager, global information using the received service landscape information; and determining, by the session manager, a response for the user using the global information (Bagley Jr, as establishing a vocabulary/package not only for a certain machine but applicable across many platforms, for an operations manager, production manager, engineer, maintenance worker, salesperson, etc. – para 0103, last 28 lines).

As per claim 11, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 1, wherein determining individual user information comprises tracking actions and context information of an individual user (Bagley Jr, as tracking user information – see para 0101, wherein a certain dialogue is expected/anticipated: "sequence data may be associated with (e.g., identify, include, and/or the like) a sequence of expected dialogue by the user (e.g., machine operator and/or the like), by intelligent industrial assistant 102, and/or the like. For example, sequence data may be associated with (e.g., identify, include, and/or the like) at least one item of expected dialogue data.").

As per claim 12, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 1, further comprising: collecting user actions over a period of time; analyzing the user actions, thereby recognizing patterns of user actions; associating information intent with the recognized patterns; and predicting the information intent of an information request using the recognized patterns (Bagley Jr, para 0093, showing tracking of user preferences and overlap with other systems – first 18 lines).

As per claim 13, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 12, wherein the step of analyzing the user actions is repeated at a predetermined frequency (Dhakshinamoorthy, tracking the number of times of a user action, the user action being trying to access via query – para 0095).

As per claim 14, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches the method of claim 1, further comprising: receiving semantic information for a desired information intent; and determining the information intent using the received semantic information (Dhakshinamoorthy, taking user input and performing natural language and semantic recognition – para 0024).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Bagley Jr et al (20200026757) in view of Dhakshinamoorthy et al (20190107827), in view of Pillai et al (20170039036), in further view of Edgar (20210201190).
As per claim 7, the combination of Bagley Jr in view of Dhakshinamoorthy teaches the method of claim 6, as mapped above. Furthermore, the combination of Bagley Jr, Dhakshinamoorthy, and Pillai teaches machine learning models, but does not detail model quality (the disclosed models are built to be successful, but it is not detailed how the "quality models" are quality). Edgar teaches AI personal assistants built by machine learning (para 0002), wherein the AI assistant models perform supervised training so that the training data is accurate and so that the diagnostic model is accurate (i.e., good enough) as well (para 0038). Edgar further teaches performance accuracy measurements of the model (para 0044); keeps updating the training data, and the training, until a certain confidence measure is achieved (para 0047); and shows the results (approved/not approved – Fig. 1, sub-blocks 122, 124, 126). (These sections of Edgar are mapped to: "determining, by an autoML pipeline, a machine learning model candidate using the information intent; testing a model quality of the machine learning model candidate; determining the machine learning model using the machine learning model candidate, when the model quality is acceptable; and informing the user of unsuccessful model generation when the model quality is not acceptable.")
Therefore, it would have been obvious to one of ordinary skill in the art of AI assistant machine modeling to modify the machine learning models in the combination of Bagley Jr, Dhakshinamoorthy, and Pillai with an accuracy measure of the machine learning model, and notification of the accuracy results, as taught by Edgar, because it would advantageously allow the user to specify an accuracy result that is required by regulations (see Edgar, para 0030).

Response to Arguments

Applicant's arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The Examiner notes the introduction of the Pillai et al (20170039036) reference to teach the concept of matching modeled input requests with modeled features/specification parameters of the technical specifications.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see the related art listed on the PTO-892 form. Furthermore, the following references were found that parallel applicant's disclosure: Zollner et al (20220001586) teaches matching modeled product parameters with modeled product specifications (para 0044). Kong et al (20210174095) teaches a multidevice network, including appliances, with machine learning models (abstract), and equates multistory buildings with applications in industrial settings (para 0030). Paulik et al (20180330737) teaches machine learned virtual assistants (para 0029) deriving intent (para 0229) with accuracy measurements (para 0244).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Opsasnick, telephone number (571) 272-7623, who is available Monday-Friday, 9am-5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/Michael N Opsasnick/
Primary Examiner, Art Unit 2658
01/22/2026

Prosecution Timeline

Mar 31, 2023: Application Filed
Mar 06, 2025: Non-Final Rejection — §103
Jun 12, 2025: Response Filed
Sep 04, 2025: Final Rejection — §103
Nov 10, 2025: Response after Non-Final Action
Dec 08, 2025: Request for Continued Examination
Jan 05, 2026: Response after Non-Final Action
Jan 22, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602554: SYSTEMS AND METHODS FOR PRODUCING RELIABLE TRANSLATION IN NEAR REAL-TIME (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592246: SYSTEM AND METHOD FOR EXTRACTING HIDDEN CUES IN INTERACTIVE COMMUNICATIONS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586580: System For Recognizing and Responding to Environmental Noises (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579995: Automatic Speech Recognition Accuracy With Multimodal Embeddings Search (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567432: VOICE SIGNAL ESTIMATION METHOD AND APPARATUS USING ATTENTION MECHANISM (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 92% (+10.5%)
Median Time to Grant: 3y 3m
PTA Risk: High

Based on 900 resolved cases by this examiner. Grant probability derived from career allow rate.
