Prosecution Insights
Last updated: April 19, 2026
Application No. 17/868,694

VIRTUAL ASSISTANT ARCHITECTURE WITH ENHANCED QUERIES AND CONTEXT-SPECIFIC RESULTS FOR SEMICONDUCTOR-MANUFACTURING EQUIPMENT

Status: Non-Final Office Action (§103)
Filed: Jul 19, 2022
Examiner: SPOONER, LAMONT M
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Lavorro Inc.
OA Round: 4 (Non-Final)

Grant Probability: 74% (Favorable)
Projected OA Rounds: 4-5
Projected Time to Grant: 3y 4m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 74% (445 granted / 603 resolved), +11.8% vs TC avg (above average)
Interview Lift: +11.8% among resolved cases with interview (moderate lift)
Typical Timeline: 3y 4m average prosecution
Career History: 625 total applications across all art units, 22 currently pending
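The headline examiner statistics are simple ratios of the raw counts shown above. A minimal sketch that reproduces them follows; note the Tech Center average of 62.0% is back-solved from the "+11.8% vs TC avg" delta and is an assumption, not an official USPTO figure:

```python
# Reproduce the examiner-level metrics above from the raw counts.
granted = 445
resolved = 603
tc_average = 62.0  # percent; inferred from the stated delta, an assumption

allow_rate = 100 * granted / resolved
delta_vs_tc = allow_rate - tc_average

print(f"Career allow rate: {allow_rate:.0f}%")   # 74%
print(f"Delta vs TC avg:  {delta_vs_tc:+.1f}%")  # +11.8%
```

The same arithmetic applies to the statute-specific rates below, each reported as a career rate alongside its delta from the Tech Center average.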

Statute-Specific Performance

Statute    Rate     vs TC avg
§101        9.8%    -30.2%
§103       50.1%    +10.1%
§102       19.7%    -20.3%
§112       12.6%    -27.4%

Based on career data from 603 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see remarks filed 9/4/2025, with respect to the rejection(s) of the pending claim(s) under 35 U.S.C. 103 have been fully considered and are persuasive. The previous rejection has therefore been withdrawn. The Examiner has, however, reviewed the provisional applications and notes the similarities with respect to Applicant's claims. For purposes of expediting prosecution, the Examiner cites Cella, 2020/0348662, which explicitly details functionality similar to Cella 2023/0176550 and clearly teaches predicting an action (as applied to its "wafer handling system," see paragraphs [1807]-[1808], which comprises multiple sensors/components for optimization, wherein an action is predicted on a processing component utilizing information obtained from one or more of the other processing components, and wherein the action optimizes operations of the processing component; see the rejection below). Accordingly, upon further consideration, a new ground(s) of rejection is made in view of the previously cited prior art (omitting Cella 2023/0176550) and further in view of at least Cella et al. (Cella, 2020/0348662).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 5, 7-18, 20-22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Lin (US 20050004780) in view of DeLuca (US 20210158174) and further in view of Cella et al. (Cella, 2020/0348662).

Regarding claim 1, Lin teaches:

A system comprising: a wafer handling system configured to hold one or more wafers for processing; processing components configured to physically treat the one or more wafers; a controller configured to operate the processing components;

[0015] The following description provides a new and unique virtual expert system to assist engineers in trouble shooting and maintaining semiconductor tools. [0003] Integrated circuits are produced by multiple processes in a wafer fabrication facility (fab)… Each process requires very precise control of numerous process parameters. [0004]… a robotic system to transfer wafers from chamber to chamber…

a virtual assistant, in communication with the processing components [and a natural language processing (NLP) engine], configured to: [receive a user query from a user] display information related to one or more of: repair, maintenance, or usage of a processing component of the processing components,

[0009] The assistant system includes a first interface for receiving tool alarms from a plurality of different semiconductor tools connected via servos and a database including a plurality of problem trees, a plurality of cause trees, and a plurality of action trees. [0034] FIG. 4e is an interface for the SOP guide. In this interface, the system 5 shows the SOP guide for the current maintenance action. A user can follow the SOP guide to work on maintenance step-by-step.
monitor operations of the processing components during physically treating the one or more wafers;

[0010] … The method includes the steps of receiving a tool alarm when a tool problem occurs … [0331] Embodiments of the methods and systems disclosed herein may include adaptive scheduling techniques for continuous monitoring…

DeLuca teaches the following additional limitations:

[a virtual assistant, in communication with the processing components and] a natural language processing (NLP) engine, configured to: receive a user query from a user [related to one or more of: repair, maintenance, or usage of a processing component of the processing components,]

[0022] According to at least one embodiment, a field technician may query the EMA program. The EMA program may utilize a search engine to parse through the knowledge base to retrieve the applicable answer to the query of the field technician. [0051] Next, at 304, knowledge base data is retrieved from the knowledge base 210. The EMA program may utilize a search module to parse through the knowledge base 210 to retrieve the knowledge base data (e.g., data in the knowledge base 210 that may be associated with the physical asset, or asset data) associated with the query submitted by the field technician. The search module may utilize an analyzer to use natural language processing (NLP) techniques…

Understand an intent or context of the user query,

[0051]… As a result, the analyzer may interpret the context and meaning for the words, phrases and/or sentences parsed by the textual data…

predict an action to be executed on the processing component [using information obtained from one or more of the other processing components, wherein the action optimizes operations of the processing component] according to the intent or context of the user query, wherein the action provides for one or more of: repair, maintenance, or usage of the one or more processing components; and

[0014] As such, a knowledge base may be trained for personal assistants associated with field service technicians (e.g., IBM EMA offering) based on the resources that are available and updated over time through digital twin representation of the physical asset. [0022] According to at least one embodiment, a field technician may query the EMA program. The EMA program may utilize a search engine to parse through the knowledge base to retrieve the applicable answer to the query of the field technician... [0032] … The change data may be generated by IoT sensor readings associated with the asset, or information provided by a field technician associated with maintenance performed, work orders completed, parts replaced and/or removed, artificial intelligence (AI) predictions, and/or any failure descriptions that is uploaded onto the EMA program 110a, 110b… [0080] Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service…

provide a context-specific response to the user query that contains information related to the predicted action, wherein operations of the processing components are optimized based on the context-specific response.
[0039] In at least one embodiment, when the digital twin is updated, the EMA program 110a, 110b may answer more contextual details associated with the asset and the usage of the asset (i.e., asset usage). [0051]… The search module may utilize an analyzer to use natural language processing (NLP) techniques… As a result, the analyzer may interpret the context and meaning for the words, phrases and/or sentences parsed by the textual data. In at least one embodiment, the analyzer may utilize key words included in the query provided by the field technician to match with the corpus of available information included in knowledge base 210.

Lin and DeLuca both pertain to virtual assistants and semiconductor manufacturing equipment; it would have been obvious to modify Lin by including natural language processing, as described in DeLuca, in order to provide contextual answers to service technicians. This combination falls under combining prior art elements according to known methods to yield predictable results, or simple substitution of one known element for another to obtain predictable results. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Lin with DeLuca lack explicit teaching of that which Cella (paragraphs [0130, 0647, 0664, 1806, 1807, 4814]) teaches: predict an action to be executed on the processing component using information obtained from one or more of the other processing components, wherein the action optimizes operations of the processing component according to the intent or context of the user query, wherein the action provides for one or more of: repair, maintenance, or usage of the one or more processing components;

"[0647] Additional details are provided below in connection with the methods, systems, devices, and components depicted in connection with FIGS. 1 through 6. In embodiments, methods and systems are disclosed herein for cloud-based, machine pattern recognition based on fusion of remote, analog industrial sensors.
For example, data streams from vibration, pressure, temperature, accelerometer, magnetic, electrical field, and other analog sensors may be multiplexed or otherwise fused, relayed over a network, and fed into a cloud-based machine learning facility, which may employ one or more models relating to an operating characteristic of an industrial machine, an industrial process, or a component or element thereof. A model may be created by a human who has experience with the industrial environment and may be associated with a training data set (such as models created by human analysis or machine analysis of data that is collected by the sensors in the environment, or sensors in other similar environments). The learning machine may then operate on other data, initially using a set of rules or elements of a model, such as to provide a variety of outputs, such as classification of data into types, recognition of certain patterns (such as those indicating the presence of faults, or those indicating operating conditions, such as fuel efficiency, energy production, or the like). The machine learning facility may take feedback, such as one or more inputs or measures of success, such that it may train, or improve, its initial model (such as improvements by adjusting weights, rules, parameters, or the like, based on the feedback). For example, a model of fuel consumption by an industrial machine may include physical model parameters that characterize weights, motion, resistance, momentum, inertia, acceleration, and other factors that indicate consumption, and chemical model parameters (such as those that predict energy produced and/or consumed e.g., such as through combustion, through chemical reactions in battery charging and discharging, and the like).
The model may be refined by feeding in data from sensors disposed in the environment of a machine, in the machine, and the like, as well as data indicating actual fuel consumption, so that the machine can provide increasingly accurate, sensor-based, estimates of fuel consumption and can also provide output that indicate what changes can be made to increase fuel consumption (such as changing operation parameters of the machine or changing other elements of the environment, such as the ambient temperature, the operation of a nearby machine, or the like). For example, if a resonance effect between two machines is adversely affecting one of them, the model may account for this and automatically provide an output that results in changing the operation of one of the machines (such as to reduce the resonance, to increase fuel efficiency of one or both machines). By continuously adjusting parameters to cause outputs to match actual conditions, the machine learning facility may self-organize to provide a highly accurate model of the conditions of an environment (such as for predicting faults, optimizing operational parameters, and the like). This may be used to increase fuel efficiency, to reduce wear, to increase output, to increase operating life, to avoid fault conditions, and for many other purposes. [0664] Methods and systems are disclosed herein for training AI models based on industry-specific feedback, including training an AI model based on industry-specific feedback that reflects a measure of utilization, yield, or impact, and where the AI model operates on sensor data from an industrial environment. 
As noted above, these models may include operating models for industrial environments, machines, workflows, models for anticipating states, models for predicting fault and optimizing maintenance, models for self-organizing storage (on devices, in data pools and/or in the cloud), models for optimizing data transport (such as for optimizing network coding, network-condition-sensitive routing, and the like), models for optimizing data marketplaces, and many others.

[1806] In implementations, a system for self-organizing collection and storage of data collection in a manufacturing environment can include a data collector for handling a plurality of sensor inputs from various sensors. Such sensors can be a component of the data collector, external to the data collector (e.g., external sensors or components of different data collector(s)), or a combination thereof. The plurality of sensor inputs can be configured to sense at least one of an operational mode, a fault mode, and a health status of at least one target system. Examples of such target systems include but are not limited to a power system, a conveyor system, a generator, an assembly line system, a wafer handling system, a chemical vapor deposition system, an etching system, a printing system, a robotic handling system, a component assembly system, an inspection system, a robotic assembly system, and a semi-conductor production system. The system can also include a self-organizing system that can be configured for self-organizing at least one of: (i) a storage operation of the data; (ii) a data collection operation of the sensors that provide the plurality of sensor inputs, and (iii) a selection operation of the plurality of sensor input, as is described herein.

[1808] In a non-limiting example, the system can include a plurality of sensors configured to sense various parameters in the environment of a wafer handling system as a target system.
Vibration sensors, fluid flow sensors, pressure sensors, gas sensors, temperature sensors, and the like may be utilized by the system to generate data regarding the operation of the wafer handling system. As mentioned herein, any and all of the storage operation, the data collection operation, and the selection operation of the plurality of sensor inputs may be adapted, optimized, learned, or otherwise self-organized by the system.

Lin, DeLuca, and Cella all pertain to virtual assistants and semiconductor manufacturing equipment; it would have been obvious to modify Lin and DeLuca by having a prediction for a particular device/component use information from other devices/components, in order to optimize that particular component and to provide contextual answers to service technicians (ibid; see Cella above, wherein it is the explicit shared services, artificial intelligence, and analytics between components that optimize the prediction for maintenance, repairs, etc., in an AI-enabled assistant environment). This combination falls under combining prior art elements according to known methods to yield predictable results, or simple substitution of one known element for another to obtain predictable results. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Regarding claim 2, DeLuca further teaches:

The system of Claim 1, wherein the NLP engine comprises: a co-referencing module configured to identify one or more previous conversations between the user and the virtual assistant;

[0047]… In at least one embodiment, the EMA program 110a, 110b may include a list of previously submitted queries or frequently asked questions, or a list of previously inquired assets or frequently inquired assets, (e.g., via a drop box) in which the field technician may review and utilize to submit a query to the EMA program 110a, 110b.
an intent identifier configured to identify the intent of the user in the user query;

[0051] …In at least one embodiment, the analyzer may utilize key words included in the query provided by the field technician to match with the corpus of available information included in knowledge base 210.

an entity extractor configured to extract one or more entities in the user query, wherein the one or more entities comprises the processing component;

[0051] …The EMA program may utilize a search module to parse through the knowledge base 210 to retrieve the knowledge base data (e.g., data in the knowledge base 210 that may be associated with the physical asset, or asset data) associated with the query submitted by the field technician…The search module may utilize an analyzer to use natural language processing (NLP) techniques (e.g., structure extraction, language identification, tokenization, decompounding, lemmatization/stemming, acronym normalization and tagging, entity extraction, phrase extraction)…

an action predictor configured to predict the action to be performed by the virtual assistant based on the one or more previous conversations, identified intent, and extracted entities;

[0027] … The information (i.e., asset data from the knowledge base 210, or knowledge base data) may include any personal observations made by one or more previous field technicians (e.g., any asset part should be serviced shortly, the asset should be placed in a different location to prevent any further malfunctions or performance interruptions), details of the work performed by any previous field technicians, and/or any additional recommendations by any previous field technicians (e.g., decommission the asset, order additional asset parts, contact the manufacturer or another specialist). [0032] … The change data may be generated by IoT sensor readings associated with the asset, or information provided by a field technician associated with maintenance performed, work orders completed, parts replaced and/or removed, artificial intelligence (AI) predictions,.. [0051] … As a result, the analyzer may interpret the context and meaning for the words, phrases and/or sentences parsed by the textual data. In at least one embodiment, the analyzer may utilize key words included in the query provided by the field technician to match with the corpus of available information included in knowledge base 210.

an action performer configured to call one or more action handlers to perform the next action; and

[0046] Referring now to FIG. 3, an operational flowchart illustrating the exemplary EMA querying process 300 used by the EMA program 110a, 110b according to at least one embodiment is depicted. [0047] At 302, a query is received…In at least one embodiment, if the query, or the asset that the query associated with, is unclear, the EMA program 110a, 110b may prompt (e.g., via a dialog box) the field technician by requesting additional information to appropriately address the query received. [0051] Next, at 304, knowledge base data is retrieved from the knowledge base 210. [0054] Then, at 306, a response is provided… [0058]… The EMA program 110a, 110b may prompt (e.g., via dialog box) the field technician to provide feedback in which the field technician may provide comments associated with the usefulness of the EMA program 110a, 110b. [0060] …In some embodiments, the field technician may command a virtual assistant or audio-enabled device to ignore the feedback request from the EMA program 110a, 110b. [0041] In at least one other embodiment, if such a type of digital twin file is modified and/or added, then the EMA program 110a, 110b may be triggered to retrain the knowledge base 210. Note: Each of these is an action by the handler.
a response generator configured to generate the context-specific response to the user query based on results produced by the one or more action handlers.

[0051] … As a result, the analyzer may interpret the context and meaning for the words, phrases and/or sentences parsed by the textual data. In at least one embodiment, the analyzer may utilize key words included in the query provided by the field technician to match with the corpus of available information included in knowledge base 210. [0054] Then, at 306, a response is provided… [0021]… Additionally, as the digital twin is updated, the EMA program may answer more contextual details regarding the asset and the corresponding usage of the asset.

See motivation from claim 1, as it is applicable here as well.

Regarding claim 4, DeLuca further teaches:

The system of Claim 1, wherein the NLP engine is used for one or more of: named entity recognition; text generation; or question answering.

[0051] … The search module may utilize an analyzer to use natural language processing (NLP) techniques (e.g., structure extraction, language identification, tokenization, decompounding, lemmatization/stemming, acronym normalization and tagging, entity extraction, phrase extraction)… As a result, the analyzer may interpret the context and meaning for the words,…

See motivation from claim 1.

Regarding claim 5, Lin teaches:

The system of Claim 1, wherein the virtual assistant is configured to assist the user with respect to one or more semiconductor-manufacturing tools.

[0015] The following description provides a new and unique virtual expert system to assist engineers in trouble shooting and maintaining semiconductor tools.
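Claim 2 recites a six-stage NLP pipeline: co-referencing, intent identification, entity extraction, action prediction, action handling, and response generation. A minimal sketch of that claimed architecture follows; the module names track the claim language, but every implementation detail (the keyword rules, the component vocabulary, the handler registry) is an illustrative assumption, not anything disclosed in the application or the cited art:

```python
# Hypothetical sketch of the six NLP-engine modules recited in claim 2.
# All matching logic below is a stand-in assumption for illustration.

def identify_intent(query: str) -> str:
    """Intent identifier: classify the user's goal from the query text."""
    for intent in ("repair", "maintenance", "usage"):
        if intent in query.lower():
            return intent
    return "unknown"

def extract_entities(query: str) -> list[str]:
    """Entity extractor: pull processing-component names from the query."""
    known_components = {"etch chamber", "robot arm", "cvd system"}
    return [c for c in known_components if c in query.lower()]

def predict_action(intent: str, entities: list[str], history: list[str]) -> str:
    """Action predictor: choose the next action from intent and entities."""
    if intent == "repair" and entities:
        return f"show_repair_procedure:{entities[0]}"
    return "ask_clarifying_question"

def handle_action(action: str) -> str:
    """Action performer: dispatch to an action handler and return its result."""
    handlers = {"show_repair_procedure": lambda arg: f"Repair steps for {arg}"}
    name, _, arg = action.partition(":")
    return handlers.get(name, lambda a: "Could you clarify the component?")(arg)

def respond(query: str, history: list[str]) -> str:
    """Response generator: produce the context-specific response."""
    intent = identify_intent(query)
    entities = extract_entities(query)
    action = predict_action(intent, entities, history)
    return handle_action(action)

print(respond("How do I repair the etch chamber?", history=[]))
# Repair steps for etch chamber
```

The `history` parameter stands in for the co-referencing module's view of previous conversations; a real system would resolve pronouns and prior entities against it rather than ignore it as this stub does.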
Regarding claim 7, DeLuca further teaches:

The system of Claim 1, further comprising: a content search engine accessible to one or more of the virtual assistant or the NLP engine, the content search engine including processed data from one or more data sources, wherein the one or more of the virtual assistant or the NLP engine uses the processed data to provide the context-specific response to the user query.

[0002] Equipment Maintenance Assistant (EMA),… [0022] According to at least one embodiment, a field technician may query the EMA program. The EMA program may utilize a search engine to parse through the knowledge base to retrieve the applicable answer to the query of the field technician… [0020] …The data specific to the change may include Internet of Things (IoT) sensor readings associated with IoT devices from the physical asset, maintenance performed, work order completed, parts replaced, AI predictions and description of the failure (e.g., including any external data that may impact failure of the asset, such as weather). [0021] According to at least one embodiment, upon association of the digital twin with the personal assistant, a digital depositor persona may select one or more digital twin resources that includes the updated data. The selected resources may then be added to the corpus of available information for the asset and may be used to train the knowledge base… Additionally, as the digital twin is updated, the EMA program may answer more contextual details regarding the asset and the corresponding usage of the asset.

See motivation from claim 1.

Regarding claim 8, DeLuca further teaches:

The system of Claim 7, wherein the processed data comprises data extracted from one or more of: user manuals, Portable Document Format (PDF) files, PowerPoint (PPT) files, text-data files, [[or]] and media files associated with one or more semiconductor-manufacturing tools.
[0020] According to at least one embodiment, when the physical asset is used, the EMA program may update the digital twin to include data specific to the change of the asset to mimic a similar or same state as the physical asset. …In the present embodiment, the data may include documents, images, audio files and any other format of transmitting data.

Regarding claim 9, DeLuca further teaches:

The system of Claim 1, further comprising: an artificial intelligence (AI) engine configured to monitor operations of one or more semiconductor manufacturing tools and predict failure conditions.

[0013] As previously described, Equipment Maintenance Assistant (EMA), … may utilize artificial intelligence (AI) services and IBM Watson® Knowledge Studio) and advanced Bayesian networks to train EMA based on work orders, service alerts, and other relevant information to assist a technician or operator and field service teams perform an assigned job. [0020] …The data specific to the change may include Internet of Things (IoT) sensor readings associated with IoT devices from the physical asset, maintenance performed, work order completed, parts replaced, AI predictions and description of the failure (e.g., including any external data that may impact failure of the asset, such as weather)…

See motivation from claim 1.

Regarding claim 10, DeLuca further teaches:

The system of Claim 9, wherein the AI engine is configured to generate a response to the user query received via the virtual assistant, the response being semantically matched to the user query.

[0051]… The EMA [Equipment Maintenance Assistant] program may utilize a search module to parse through the knowledge base 210 to retrieve the knowledge base data (e.g., data in the knowledge base 210 that may be associated with the physical asset, or asset data) associated with the query submitted by the field technician. The search module may utilize an analyzer to use natural language processing (NLP) techniques… [0054] Then, at 306, a response is provided. When the EMA program 110a, 110b identifies one or more pieces of asset data from the knowledge base 210 (i.e., knowledge base data) that may serve as a response to the query submitted by the field technician,

See motivation from claim 1.

Regarding claim 11, DeLuca further teaches:

The system of Claim 1, further comprising: user interface components to display the user query and the context-specific response by the virtual assistant.

[0039] In at least one embodiment, when the digital twin is updated, the EMA program 110a, 110b may answer more contextual details associated with the asset and the usage of the asset (i.e., asset usage). [0072] Each of the sets of external components 904a, b can include a computer display monitor 924… [0054] Then, at 306, a response is provided. When the EMA program 110a, 110b identifies one or more pieces of asset data from the knowledge base 210 (i.e., knowledge base data) that may serve as a response to the query submitted by the field technician,…

See motivation from claim 1.

Regarding claim 12, DeLuca further teaches:

The system of Claim 1, wherein the user query is a natural language query.

[0047] … For example, on the main screen of the EMA program 110a, 110b, the field technician will be prompted, via a dialog box, to include in natural language a query associated with an asset.

See motivation from claim 1.

Regarding claim 13, Lin teaches:

The system of Claim 1, wherein the user is one of a field service engineer, a technician, or a process engineer associated with a semiconductor-manufacturing system.

[0015] The following description provides a new and unique virtual expert system to assist engineers in trouble shooting and maintaining semiconductor tools.
Regarding claim 14, DeLuca teaches:

The system of Claim 1, wherein the virtual assistant is one of a conversational bot, smart bot, a text chat bot, a speech-to-text chat bot, or a virtual consultant.

[0051] … The search module may utilize an analyzer to use natural language processing (NLP) techniques…

See motivation from claim 1.

Regarding claim 15, DeLuca teaches:

The system of Claim 1, wherein the virtual assistant is an NLP-based bot.

[0051] … The search module may utilize an analyzer to use natural language processing (NLP) techniques…

See motivation from claim 1.

Regarding claim 16:

A method comprising: providing a virtual assistant in communication with a semiconductor-manufacturing system, the semiconductor-manufacturing system comprising a wafer handling system configured to hold one or more wafers for processing, processing components configured to physically treat the one or more wafers, and a controller configured to operate the processing components; receiving, by the virtual assistant, a user query from a user related to one or more of: repair, maintenance, or usage of a processing component of the processing components; monitor operations of the processing components during physically treating the one or more wafers; processing, using a natural language processing (NLP) engine, the user query to generate a context-specific response to the user query, wherein processing the user query comprises: understanding an intent or context of the user query; predicting an action to be executed by the processing component using information obtained from one or more of the other processing components, wherein the action optimizes operations of the processing component according to the intent or context of the user query, wherein the action provides for one or more of: repair, maintenance, or usage of the one or more processing components; and generating the context-specific response to the user query that contains information related to the predicted action; and providing, by the virtual assistant, the context-specific response to the user wherein operations of the processing component are optimized based on the context-specific response.

Claim 16 is a method claim with limitations similar to the limitations of Claim 1 and is rejected under similar rationale.

Regarding claim 17:

The method of Claim 16, wherein processing, using the NLP engine, the user query comprises: identifying one or more previous conversations between the user and the virtual assistant; identifying the intent of the user in the user query; extracting one or more entities in the user query, wherein the one or more entities comprises the processing component; predicting the action to be performed by the virtual assistant based on the one or more previous conversations, identified intent, and extracted entities; calling one or more action handlers to perform the next action; and generating the context-specific response to the user query based on results produced by the one or more action handlers.

Claim 17 is a method claim with limitations similar to the limitations of Claim 2 and is rejected under similar rationale.

Regarding claim 18:

The method of Claim 16, wherein the virtual assistant is configured to assist the user with respect to one or more semiconductor-manufacturing tools.

Claim 18 is a method claim with limitations similar to the limitations of Claim 5 and is rejected under similar rationale.

Regarding claim 20:

The method of Claim 16, wherein the user is one of a field service engineer, a technician, or a process engineer associated with the semiconductor-manufacturing system.

Claim 20 is a method claim with limitations similar to the limitations of Claim 13 and is rejected under similar rationale.

Regarding claim 21, DeLuca further teaches:

The system of Claim 1, wherein the virtual assistant is further configured to execute the predicted action on the one or more processing components.
[0016] According to at least one embodiment, the customer may initiate an instance (i.e., change) of the EMA program (i.e., field technician personal assistant program). The EMA program may reduce mean time to repair (MTTR), reduce troubleshooting time, provide recommended actions, parts, materials and tools… See motivation from claim 1.

Regarding claim 22, DeLuca further teaches: The system of Claim 1, wherein the user query comprises an error code, and wherein the context-specific response comprises one or more of: steps to repair the processing component associated with the error code, data containing information related to a repair procedure. [0019] … The base information from the digital twin resources (i.e., base digital twin resources) may include … maintenance history, …, fault codes, scheduled maintenance plans, usage (e.g., Internet of Things (IoT) sensor readings), … safety notifications/alerts, repair procedures, and troubleshooting tips. In the present embodiment, any changes associated with the base digital twin resources may be utilized to determine whether there was a change to the digital twin associated with a physical asset. See motivation from claim 1.

Regarding claim 24, DeLuca further teaches: The system of Claim 1, wherein the predicted action provides for improving one or more tool driven metrics, wherein the one or more tool driven metrics comprises one or more of: a reduce mean time to repair (MTTR) metric, an increase mean time between failure (MTBF) metric, and an entitlement metric. [0016] … The EMA program may reduce mean time to repair (MTTR), … enhance mean time between failure (MTBF) rate through recommended actions, and improve first time to fix (FTTR) rate. See motivation from claim 1.

Claim 3 is rejected under 35 U.S.C.
103 as being unpatentable over Lin in view of DeLuca, in view of Cella, in further view of Okamoto (WO 2021006117), and in further view of Honda (US 20210294950).

Regarding claim 3, Okamoto teaches: The system of Claim 1, wherein the NLP engine is a trained model that is trained based on a transformer model architecture and using relatively large volumes of semiconductor data. [page 6, paragraph 2] … In the text speech synthesis (TTS) technology that synthesizes natural speech from text, […]) and (page 36, paragraph 6): Further, in the speech synthesis processing device 300, the encoder unit 3 and the decoder unit 5 are not limited to the above configuration, and may have other configurations. For example, the encoder unit 3 and the decoder unit 5 may be configured by adopting the encoder and decoder configurations based on the transformer model architecture disclosed in Document A below.

Lin/DeLuca/Cella and Okamoto pertain to semiconductor manufacturing and NLP; it would have been obvious to modify claim 1 by including a transformer architecture, as described in Okamoto, in order to encode and decode the data. This combination falls under combining prior art elements according to known methods to yield predictable results or simple substitution of one known element for another to obtain predictable results. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Honda teaches: The system of Claim 1, wherein the NLP engine is a trained model that is trained based on a transformer model architecture and using semiconductor data. [0003] Recently, the application of Machine Learning ("ML") algorithms has become popular for use with semiconductor manufacturing processes. Generally, an ML model can be constructed for a specific process parameter by sampling relevant data in order to build one or more training sets of data to represent expected performance of the process with regard to that parameter. [0064] FIG. 7 illustrates an exemplary process 700 for generating a predictive model for failure detection and classification. In step 702, input data is collected from many data sources during a production run for semiconductor devices.

Lin/DeLuca/Okamoto and Honda pertain to semiconductors and ML; it would have been obvious to modify the system of claim 1 by including semiconductor data, as described in Honda, in order to improve the accuracy of the ML model within the semiconductor context. This combination falls under combining prior art elements according to known methods to yield predictable results or simple substitution of one known element for another to obtain predictable results. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of DeLuca, in view of Cella, and in further view of Rhee (US 20210366749).

Regarding claim 23, DeLuca teaches: The system of Claim 1, wherein the user query comprises a natural language description of an issue while treating a wafer, and wherein the context-specific response comprises a best-known recipe for treating the wafer. [0047] … For example, on the main screen of the EMA program 110a, 110b, the field technician will be prompted, via a dialog box, to include in natural language a query associated with an asset. See the motivation in the rejection of claim 1.

Rhee further teaches: …and wherein the context-specific response comprises a best-known recipe for treating the wafer. [0038] Task data module 214 may access task data that identifies the tasks for manufacturing the product. The task data may identify the steps and an order of the steps that manufacturing tools perform on the product. Task data module 214 may enable computing device 120 to receive the task data via user input, a data storage object, or combination thereof.
In one example, the task data may be a sequence program that is stored in a centralized data store that is accessible over a network (e.g., recipe repository, sequence repository).

Lin/DeLuca/Cella and Rhee pertain to semiconductors and manufacturing; it would have been obvious to modify the system of claim 1 by including recipe repositories, as described in Rhee, in order to assist with wafer treatment. This combination falls under combining prior art elements according to known methods to yield predictable results or simple substitution of one known element for another to obtain predictable results. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of DeLuca, in view of Cella, and in further view of Malhotra et al. (Malhotra, US 2021/0103606).

Regarding claim 25, Malhotra further teaches: wherein the user query comprises a series of ongoing queries between the user and the virtual assistant, wherein the series of ongoing queries comprises a first user query and a second user query received after the first user query, wherein the NLP is further configured to:

[0007] In some aspects, the media guidance application may generate a neural network that takes a previous query and a current query as inputs and outputs a result indicating a merge or replace operation, where the neural network comprises a first set of nodes associated with an input layer of the neural network and a second set of nodes associated with an artificial layer of the neural network. For example, the media guidance application may generate a neural network to model and predict a user's intention to either merge or replace a portion in a first and second queries. For example, the media guidance application may generate a first set of nodes corresponding to an input layer of the neural network and may associate each node of the neural network with a corresponding word or phrase.
The media guidance application may also generate a second set of nodes corresponding to an artificial layer in the neural network, where each node of the second set of nodes is connected to at least one node of the first set of nodes. The media guidance application may utilize the input nodes to represent words in the first and second queries. For example, the media guidance application may map words in the first query and words in the second query to the words associated with nodes in the first set of nodes. The media guidance application may retrieve weights associated with the connections between the first set of nodes and the second set of nodes to compute values for the second set of nodes (e.g., by multiplying values in the first set of nodes by the weights and then summing the resultant multiplications). The media guidance application may retrieve the values associated with the nodes in the second set of nodes to determine whether to merge or replace the first and second queries. Because the media guidance application trains the neural network to model whether a query should be merged or replaced based on words in the queries and then utilizes the training data (e.g., based on the weights between nodes), the neural network is able to predict merge and replace operations for queries that were not already in the training set.

[0008] The media guidance application may train the neural network, based on a training data set, to determine weights associated with connections between the first set of nodes and the second set of nodes in the neural network. For example, the media guidance application may retrieve a training data set from memory wherein the training data set comprises a pair including a model current query and a model previous query and a flag indicating whether the model previous query and model current query should be merged or replaced.
For example, the training data set may comprise a first pair with a first query “What is the weather like in New York?” and a second query “How about in D.C.?” and a corresponding replace flag (e.g., because the user's intent is to ask “What is the weather like in D.C.?” by replacing New York with D.C. in the first query), and a second pair with a first query “What are some Tom Cruise movies?” and a second query “Are any action movies?” and a corresponding merge flag (e.g., because the user's intent is to ask “What are some Tom Cruise action movies?” and therefore the queries should be merged to update the context).

[0009] In some embodiments, the media guidance application may input the model previous query and the model current query to nodes of the first set of nodes. For example, the media guidance application may identify words in the first query and may map the words in the first query to words associated with nodes in the first set of nodes (e.g., the nodes on the input layer of the neural network). For example, the mapping may include incrementing, by the media guidance application, a value associated with each node in the first layer that corresponds to a word in the first query. Because the media guidance application may compute the values of the second set of nodes based on multiplying the value of the first set of nodes and weights associated with those nodes, the incrementing has the effect of weighting the decision as to whether the query should be merged or replaced. Likewise, the media guidance application may map words in the second query to words associated with nodes in the first set of nodes and may increment a value associated with the mapped nodes.

[0001] Context maintenance is an important attribute of modern natural language processing systems to allow a user to communicate with a computer system in a normal conversational manner.
For example, a user may prompt the search system with a first query, “Show me supermarkets open now,” followed by a second prompt, such as, “That sell organic goods.” In a conversational setting, a human would understand the user to be searching for supermarkets that are open now and that sell organic goods. Alternatively, the user may follow the first prompt with a third prompt such as, “How about bodegas?” In a conversational setting, a human would understand the user changed the context of the search and is now instead searching for bodegas open now. Oftentimes, computers struggle with determining whether to maintain context between two queries or to perform a context switch. The conventional approach to solve this problem is to define a set of rules to determine whether a first query and a second query are interrelated and perform the context switch when they are not related. However, rule-based systems are rigid and require programmers to think about and try to address every possible situation that may arise during a natural language conversation, resulting in a system that has only a limited number of possible query inputs. Therefore, the user is burdened with learning the inputs recognized by the system or must rephrase queries to receive desired results from the conventional system.
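The single-layer mechanism Malhotra describes in paragraphs [0007]-[0009] (bag-of-words input nodes, a weighted "artificial" layer computed by multiply-and-sum, and a merge-vs-replace decision read off that layer) can be sketched as a toy illustration. The vocabulary, weights, and zero threshold below are invented for demonstration; Malhotra discloses no concrete values, and a real system would learn the weights from the flagged training pairs described in [0008].

```python
# Toy sketch of Malhotra's merge/replace predictor: words map onto
# input nodes, node values are multiplied by (hypothetical) learned
# weights and summed, and the sign of the sum selects the operation.

VOCAB = ["weather", "new", "york", "how", "about", "d.c.",
         "movies", "tom", "cruise", "any", "action"]

# Hypothetical weights standing in for trained connection weights:
# negative values signal a context switch (replace), positive values
# signal an added constraint on the same context (merge).
WEIGHTS = {
    "how": -1.0, "about": -1.0, "d.c.": -0.5,
    "any": 0.8, "action": 0.8, "movies": 0.6,
}

def bag_of_words(query: str) -> dict:
    """Map query words onto input nodes, incrementing a node's value
    for each occurrence (the 'incrementing' step in [0009])."""
    counts = {}
    for word in query.lower().replace("?", "").split():
        if word in VOCAB:
            counts[word] = counts.get(word, 0) + 1
    return counts

def decide(previous: str, current: str) -> str:
    """Feed both queries into the input layer, compute the artificial-
    layer value by multiplying node values by weights and summing,
    then threshold to choose merge or replace."""
    nodes = bag_of_words(previous)
    for word, count in bag_of_words(current).items():
        nodes[word] = nodes.get(word, 0) + count
    score = sum(count * WEIGHTS.get(word, 0.0)
                for word, count in nodes.items())
    return "merge" if score > 0 else "replace"
```

With these illustrative weights, the two training examples from [0008] come out as Malhotra flags them: the weather/D.C. pair scores negative (replace), and the Tom Cruise/action-movies pair scores positive (merge).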
based on receiving the second user query, identify the first user query between the user and the virtual assistant (ibid-see above, first and second query from user to neural network conversational system); map elements of the second user query to elements of the first user query (ibid-see above mapping discussion), wherein the mapping comprises assigning weight to each element of the first user query indicative of relative importance of each element of the first user query to each element of the second user query (ibid-his mapping for word elements based on words with respective weights in the query, the weights interpreted as importance); and determine the intent or context of the user query based on replacing one or more elements in the second query with one or more elements from the first query based on the mapping (ibid-his prediction of a user's "intent" to replace a portion in the second query with words from the first query based on the mapping).

Lin/DeLuca and Malhotra pertain to NLP; it would have been obvious to modify Lin in view of DeLuca by including a series of queries and utilizing context and intention to replace elements within a query, in order to maintain context based on a series and history of queries and to perform a search operation (ibid-Malhotra, see also abstract). This combination falls under combining prior art elements according to known methods to yield predictable results or simple substitution of one known element for another to obtain predictable results. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see PTO-892). Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAMONT M SPOONER, whose telephone number is (571) 272-7613. The examiner can normally be reached 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAMONT M SPOONER/
Primary Examiner, Art Unit 2657
10/10/2025

Prosecution Timeline

Jul 19, 2022
Application Filed
Aug 26, 2024
Non-Final Rejection — §103
Nov 05, 2024
Interview Requested
Nov 12, 2024
Applicant Interview (Telephonic)
Nov 12, 2024
Examiner Interview Summary
Nov 13, 2024
Response Filed
Jan 29, 2025
Final Rejection — §103
Mar 18, 2025
Interview Requested
Mar 25, 2025
Examiner Interview Summary
Mar 25, 2025
Applicant Interview (Telephonic)
May 05, 2025
Request for Continued Examination
May 07, 2025
Response after Non-Final Action
May 31, 2025
Non-Final Rejection — §103
Sep 04, 2025
Response Filed
Oct 10, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602542
Text Analysis System, and Characteristic Evaluation System for Message Exchange Using the Same
2y 5m to grant Granted Apr 14, 2026
Patent 12596881
COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12591737
Systems and Methods for Word Offensiveness Detection and Processing Using Weighted Dictionaries and Normalization
2y 5m to grant Granted Mar 31, 2026
Patent 12572744
Generative Systems and Methods of Feature Extraction for Enhancing Entity Resolution for Watchlist Screening
2y 5m to grant Granted Mar 10, 2026
Patent 12518107
COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
74%
Grant Probability
86%
With Interview (+11.8%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 603 resolved cases by this examiner. Grant probability derived from career allow rate.
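The headline projections can be reconstructed from the career statistics shown above: a minimal sketch, assuming the grant probability is the career allow rate (445 granted of 603 resolved) and the with-interview figure is a simple additive percentage-point adjustment. The tool's exact model is not stated, so the additive treatment is an assumption.

```python
# Reconstructing the displayed projections from the examiner's career
# data. The additive interview adjustment is an assumption for
# illustration; the tool's actual model is not disclosed.

granted, resolved = 445, 603            # career outcomes shown above
allow_rate = granted / resolved * 100   # 73.8%, displayed rounded
interview_lift = 11.8                   # percentage-point lift shown above

print(round(allow_rate))                   # 74  (Grant Probability)
print(round(allow_rate + interview_lift))  # 86  (With Interview)
```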
