Prosecution Insights
Last updated: April 19, 2026
Application No. 17/095,251

VISUAL ARTIFICIAL INTELLIGENCE IN SCADA SYSTEMS

Status: Non-Final OA (§103)
Filed: Nov 11, 2020
Examiner: ZHANG, FAN
Art Unit: 2682
Tech Center: 2600 — Communications
Assignee: Aveva Software, LLC
OA Round: 8 (Non-Final)
Grant Probability: 54% (Moderate)
Expected OA Rounds: 8-9
Time to Grant: 3y 1m
Grant Probability With Interview: 71%

Examiner Intelligence

Career Allow Rate: 54% (322 granted / 592 resolved; -7.6% vs TC avg)
Interview Lift: +16.5% for resolved cases with an interview (strong)
Typical Timeline: 3y 1m avg prosecution; 43 applications currently pending
Career History: 635 total applications across all art units
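As a sanity check, the headline figures above follow directly from the raw counts. A minimal sketch, assuming (our inference, not documented by the tool) that the dashboard adds the interview lift on top of the career allow rate:

```python
# Reproduce the dashboard's headline examiner statistics from raw counts.
granted = 322        # granted cases, per the examiner card above
resolved = 592       # resolved cases, per the examiner card above
interview_lift = 0.165  # +16.5% lift reported for cases with an interview

allow_rate = granted / resolved
with_interview = allow_rate + interview_lift  # additivity is assumed

print(f"Career allow rate: {allow_rate:.0%}")      # -> 54%
print(f"With interview:    {with_interview:.0%}")  # -> 71%
```

Both printed values match the card, so the 71% "with interview" figure appears to be a simple additive combination rather than a separately measured cohort rate.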

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 2.2% (-37.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 592 resolved cases.
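The "vs TC avg" deltas let us back out the Tech Center baseline each statute is compared against. A short sketch, assuming the delta is simply examiner rate minus TC average:

```python
# Back out the implied Tech Center average for each statute from the
# examiner's rate and the reported "vs TC avg" delta.
rates = {
    "§101": (10.9, -29.1),
    "§103": (65.6, +25.6),
    "§102": (12.1, -27.9),
    "§112": (2.2, -37.8),
}

for statute, (examiner_rate, delta_vs_tc) in rates.items():
    implied_tc_avg = examiner_rate - delta_vs_tc
    print(f"{statute}: implied TC average = {implied_tc_avg:.1f}%")
```

Every statute implies the same 40.0% baseline, which suggests the dashboard compares each statute against a single Tech Center-wide average rather than per-statute baselines.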

Office Action

§103
DETAILED ACTION

Notice of AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Request for Continued Examination

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on Jan. 02, 2026 has been entered.

Response to Arguments

3. Applicant's remarks received on Jan. 02, 2026 with respect to the amended independent claims have been acknowledged and are moot in view of a new ground of rejection necessitated by the corresponding amendment. Currently, claims 1-20 remain rejected.

Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1-6, 8-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Honglei et al (WO Pub. 2020/033635) in view of JP'912 (JP Pub. 4976912), Barton et al (US Pub. 2020/0293925), Trivelpiece et al (US Pub. 2020/0394589), and Goldenberg et al (US Patent 8,799,282).
Regarding claim 1 (currently amended), Honglei et al teaches: A computing device comprising: one or more processors; a non-transitory computer-readable memory having stored therein computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform actions comprising [p0174]: identifying an image dataset including a first set of images related to physical assets operating at an industrial site, the first set of images comprising images of the physical assets and the environment at the industrial site [p0061, p0182]; classifying the images in the first set of images of the image dataset by defining a binary classifier using a set of categories and applying the binary classifier to the image dataset [p0194]; configuring a training model based on the classified images in the first set of images, the training model being configured to trigger a set of cameras located at the industrial site to capture, analyze, and classify a new second set of images that satisfy a criteria based on classification of the images in the first set of images [p0210]; monitoring the condition and health of the physical assets using the training model [p0219]; classifying, automatically, each of the images in the second set of images based on analysis of the second set of images by the training model [p0194, p0219]; updating the training model based on information indicating the classification of the second set of images and applying the updated training model to capture, analyze, and classify a future set of images [p0223, p0226]; and displaying, within a user interface (UI), said second set of images and information indicating said classification based on the analysis and classification performed by the training model [p0181, p0182].

Although a binary classifier is not specified by Honglei et al, it would have been an obvious choice for the labeling operation adopted by network classification.
In the same field of endeavor, JP'912 teaches: classifying the images in the first set of images of the image dataset by defining a binary classifier using a set of categories and applying the binary classifier to the image dataset [page 10: claims (p07-p08); page 11: p01-p03]. Therefore, given JP'912's prescription on using a binary classifier that applies a training data set for labeling determination based on a threshold value, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the two to apply a binary classifier to an image set based on a threshold value for classification purposes.

Honglei et al in view of JP'912 does not explicitly prescribe automatically capturing images. In the same field of endeavor, solving the same problem of capturing images automatically based on a trained model, Barton et al teaches: the training model triggering at least one camera in the set of cameras to capture a new second set of images that satisfy a criteria based on classification of the images in the first set of images [p0022, p0023], and continuously updating a training model [p0037]. Therefore, given Barton et al's prescription on utilizing a machine learning model to automatically recognize a person and capture its image, and Honglei et al's disclosure on using training models to recognize defects of physical assets, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of all to apply Barton et al's technique to trigger the monitoring system to automatically capture images of defects based on a training model for defect identification and classification, improving operational efficiency.

Honglei et al in view of JP'912 and Barton et al does not disclose images of the environment.
In the same field of endeavor, Trivelpiece et al teaches: monitoring the condition and health of the physical assets using the training model, and identifying an image dataset including a first set of images related to physical assets operating at an industrial site, the first set of images comprising images of the physical assets and the environment at the industrial site [p0006, claim 20]. Therefore, given Trivelpiece et al's exemplification of taking images of the environment at an industrial site for training and classification, and Honglei et al's prescription on performing image classification for physical asset monitoring and control through a SCADA system using neural network model training, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of all to apply the technique to any industrial field, training a machine learning model with related images for asset identification and classification purposes.

Honglei et al discloses confidence-level thresholding based on an acceptable degree of certainty, and selecting manual classification if below the threshold. JP'912 discloses a performance objective related to balancing error types. In the same field of endeavor, Goldenberg et al determines a tolerable false positive/negative rate: wherein the analysis includes determining a tolerance rate for false positives and false negatives of the second set of images belonging in their respective classification [col. 29, lines 22-34]. Therefore, given Goldenberg et al's tolerance system setting, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of all to modify Honglei et al's threshold setting into a tolerance-rate-driven threshold selection, incorporating a known technique to improve a similar decision system.
Regarding claim 2 (previously presented), the rationale applied to the rejection of claim 1 has been incorporated herein. Honglei et al further teaches: The computing device of claim 1, wherein said image dataset comprises a plurality of predetermined images [p0181, p0182 (history or prior image data)].

Regarding claim 3 (previously presented), the rationale applied to the rejection of claim 1 has been incorporated herein. Honglei et al further teaches: The computing device of claim 1, further comprising: capturing, via at least one of the set of cameras, a third set of images, wherein said identified image dataset comprises said captured third set of images [p0161, p0182 (any number of sets of images can be captured by the sets of cameras)].

Regarding claim 4 (previously presented), the rationale applied to the rejection of claim 1 has been incorporated herein. Honglei et al further teaches: The computing device of claim 1, further comprising: analyzing the image dataset and determining a type of the set of images; and identifying the set of categories based on said determined type [p0191-p0194].

Regarding claim 5 (previously presented), the rationale applied to the rejection of claim 4 has been incorporated herein.
Honglei et al in view of JP'912 further teaches: The computing device of claim 4, further comprising: identifying a second set of categories, said second set of categories being based on another type of set of images; and converting settings associated with said second set of categories, said conversion causing a transfer modelling of the second set of categories to correspond to the type of the set of categories, wherein said second set of categories is used for defining said binary classifier [Honglei: p0210, p0220-p0222 (image sets of different types of products may be converted and classified for application of an existing trained model under certain constraints and objectives); JP'912: page 10: p07-p08, page 11: p01-p03 (contents with certain features are collected to train a binary classifier)]. Therefore, the combined teaching of Honglei et al and JP'912 would have made it obvious to apply a trained model, through conversion, to a different data set of a certain category used for a binary classifier, for producing classification with sufficient accuracy on various data sets.

Regarding claim 6 (previously presented), the rationale applied to the rejection of claim 1 has been incorporated herein. Honglei et al further teaches: The computing device of claim 1, further comprising: applying said updated training model to a fourth set of images [p0223, p0226].

Regarding claim 8 (previously presented), the rationale applied to the rejection of claim 1 has been incorporated herein. Honglei et al further teaches: The computing device of claim 1, wherein said monitoring is performed when said computing device is in runtime mode [p0210, p0217, p0218 (images are captured and processed continuously)].

Regarding claim 9 (previously presented), the rationale applied to the rejection of claim 1 has been incorporated herein.
Honglei et al further teaches: The computing device of claim 1, wherein said monitoring is automatically performed based on execution of the training model [p0182, p0183].

Regarding claim 10 (previously presented), the rationale applied to the rejection of claim 1 has been incorporated herein. Honglei et al further teaches: The computing device of claim 1, wherein said actions are performed via an image training application executing in association with said computing device [p0194, p0219].

Claims 11 (currently amended), 12, 13 (previously presented), 15 (previously presented), 17, and 18 (previously presented) have been analyzed and rejected with regard to claims 1-3, 6, 9, and 10, respectively. Claim 14 (previously presented) was rejected with regard to claims 4 and 5. Claims 19 (currently amended) and 20 (previously presented) were analyzed and rejected with regard to claims 1 and 6, respectively.

6. Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Honglei et al (WO Pub. 2020/033635), JP'912 (JP Pub. 4976912), Barton et al (US Pub. 2020/0293925), Trivelpiece et al (US Pub. 2020/0394589), and Goldenberg et al (US Patent 8,799,282), and in further view of JP'826 (JP Pub. WO 2019/239826).

Regarding claim 7 (previously presented), the rationale applied to the rejection of claim 1 has been incorporated herein. Honglei et al in view of JP'912 does not provide a detailed description of the image display portions. In the same field of endeavor, JP'826 teaches: The computing device of claim 1, wherein said UI further comprises a display, comprising: a portion for viewing a classification of a captured image within said second set of images; a portion for capturing another set of images for classification; and a portion for selecting images from said other set of images for classification [page 10: p02-p04].
Therefore, given JP'826's prescription on displaying and selecting different sets of images for classification, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of all to apply a UI for image grouping and classification, for improved user experience.

Claim 16 (previously presented) has been rejected with regard to claim 7.

Contact

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to FAN ZHANG, whose telephone number is (571) 270-3751. The examiner can normally be reached Mon-Fri, 9:00-5:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Tieu, can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Fan Zhang/
Patent Examiner, Art Unit 2682
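For readers outside the art, the mechanism the §103 combination hinges on, Honglei's confidence thresholding with a manual-review fallback plus Goldenberg's tolerable false positive/negative rate, can be sketched in a few lines. Everything below (function names, scores, the threshold-search rule) is a hypothetical illustration, not the applicant's actual implementation or any reference's disclosed code.

```python
def choose_threshold(scores, labels, max_fp_rate):
    """Pick the lowest score threshold whose false-positive rate on
    held-out (score, label) pairs stays within the tolerated rate
    (the Goldenberg-style tolerance setting)."""
    negatives = sum(1 for y in labels if y == 0)
    for t in sorted(set(scores)):
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        if negatives == 0 or fp / negatives <= max_fp_rate:
            return t
    return max(scores)

def classify(score, threshold):
    # Above threshold: auto-classify as "defect"; below: route to
    # manual review (the fallback Honglei is cited for).
    return "defect" if score >= threshold else "manual review"

scores = [0.2, 0.4, 0.55, 0.7, 0.9]   # hypothetical classifier confidences
labels = [0,   0,   0,    1,   1]     # 1 = true defect
t = choose_threshold(scores, labels, max_fp_rate=0.0)
print(t, classify(0.8, t))  # -> 0.7 defect
```

With a zero false-positive tolerance, the search settles on the lowest threshold that excludes every non-defect score, which is the tolerance-rate-driven threshold selection the rejection attributes to the combined references.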

Prosecution Timeline

Nov 11, 2020: Application Filed
Feb 25, 2021: Response after Non-Final Action
Apr 05, 2023: Non-Final Rejection — §103
Jul 13, 2023: Response Filed
Jul 30, 2023: Final Rejection — §103
Nov 06, 2023: Request for Continued Examination
Nov 07, 2023: Response after Non-Final Action
Nov 14, 2023: Non-Final Rejection — §103
Feb 20, 2024: Response Filed
Apr 06, 2024: Final Rejection — §103
Aug 12, 2024: Request for Continued Examination
Aug 15, 2024: Response after Non-Final Action
Aug 24, 2024: Non-Final Rejection — §103
Dec 30, 2024: Response Filed
Mar 07, 2025: Non-Final Rejection — §103
Jun 12, 2025: Response Filed
Jun 28, 2025: Final Rejection — §103
Jan 02, 2026: Request for Continued Examination
Jan 16, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582477: COMPUTER-IMPLEMENTED METHOD FOR DETERMINATION OF A BONE CEMENT VOLUME OF A BONE CEMENT FOR A PERCUTANEOUS VERTEBROPLASTY (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586277: QUASI-NEWTON MRI DEEP LEARNING RECONSTRUCTION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579612: SYSTEM AND METHOD FOR CONVOLUTION OF AN IMAGE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12555364: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM (granted Feb 17, 2026; 2y 5m to grant)
Patent 12548677: COMPUTER IMPLEMENTED METHOD FOR QUANTIFYING AND PREDICTING THE PROGRESSION OF INTERSTITIAL LUNG DISEASE (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 8-9
Grant Probability: 54%
With Interview: 71% (+16.5%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 592 resolved cases by this examiner. Grant probability derived from career allow rate.
