Prosecution Insights
Last updated: April 19, 2026
Application No. 18/348,917

CONFIGURABLE MONITORING AND ACTIONING WITH DISTRIBUTED PROGRAMMABLE PATTERN RECOGNITION EDGE DEVICES

Final Rejection §103

Filed: Jul 07, 2023
Examiner: CHEEMA, UMAR
Art Unit: 2458
Tech Center: 2400 — Computer Networks
Assignee: Aondevices Inc.
OA Round: 4 (Final)

Grant Probability: 66% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 5y 4m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 66% (154 granted / 235 resolved; +7.5% vs TC avg — above average)
Interview Lift: +8.4% across resolved cases with interview (moderate lift)
Avg Prosecution: 5y 4m typical timeline (44 currently pending)
Total Applications: 279 across all art units
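The headline figures in this panel follow directly from the raw counts. A minimal sketch of that arithmetic (the rounding and the simple additive interview adjustment are assumptions about the tool's formula, not a confirmed methodology):

```python
# Sketch: deriving the headline examiner statistics from raw counts.
# Treating the interview lift as a flat additive adjustment is an
# assumption, not necessarily how the tool blends it.

granted = 154
resolved = 235

career_allow_rate = granted / resolved               # 0.655... -> displayed as "66%"
interview_lift = 0.084                               # "+8.4% interview lift"
with_interview = career_allow_rate + interview_lift  # -> displayed as "74%"

print(f"Career allow rate: {career_allow_rate:.0%}")
print(f"With interview:    {with_interview:.0%}")
```

The 66% career allow rate and the 74% with-interview probability shown elsewhere on this page reproduce under this arithmetic.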

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 14.4% (-25.6% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 235 resolved cases.
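Each delta above is the examiner's per-statute rate minus the Tech Center average, so the baseline can be back-solved; notably, all four statutes imply the same 40% Tech Center estimate. A quick consistency check (what the per-statute rates measure is assumed from context):

```python
# Sketch: back-solving the Tech Center baseline from each statute's
# rate and its stated delta ("rate vs TC avg"). The semantics of the
# rates themselves are assumed from the surrounding report.

stats = {                      # statute: (examiner rate %, delta vs TC avg %)
    "101": (12.6, -27.4),
    "103": (52.8, +12.8),
    "102": (14.4, -25.6),
    "112": (11.7, -28.3),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta      # delta = examiner rate - TC average
    print(f"§{statute}: examiner {rate}% vs TC average {tc_avg:.1f}%")
```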

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to the amendment/reconsideration filed 1/14/2026; the amendment/reconsideration has been considered. Claims 1-3, 5-9, 11-18 and 21-24 are pending for examination.

Response to Arguments

Applicant's arguments are moot in light of the new ground of rejections set forth below.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

6. Claims 1, 3, 5-6, 8-9, 11-14, 16-18, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Bhattacharyya et al. (US 2020/0285997) in view of GUZIK (US 2024/0104933).

As to claim 1, Bhattacharyya discloses a configurable monitoring and actioning system, comprising: one or more programmable edge devices each including a machine learning pattern recognizer, a sensor providing sensor input data to the pattern recognizer, and a memory storing pre-trained machine learning weight values for the pattern recognizer, the machine learning pattern recognizer generating event detections of a first type corresponding to a first pattern recognition event among a plurality of selectable pattern recognition events and associated with a first real-world event based upon evaluations of the sensor input data from the sensor against a first set of the pre-trained machine learning weight values correlated to the real-time event detections of the first type ([0346], "In some embodiments, the machine being analyzed is a diesel engine within a marine vessel, and the analysis system's goal is to identify diesel engine operational anomalies and/or diesel engine sensor anomalies at near real-time latency, using an edge device installed at or near the engine. Of course, other types of vehicles, engines, or machines may similarly be subject to the monitoring and analysis"; [0363], "In order to create an engine model, a training time period is selected in which the engine had no apparent operational issues. In some embodiments, a machine learning algorithm is used to generate the engine models directly on the edge device, in a local or remote server, or in the cloud. A modeling technique can be selected that offers low model bias (e.g. spline, neural network or support vector machines (SVM), and/or a Generalized Additive Model (GAM))"; claim 8, "the statistical norm is updated to distinguish a pattern of sensor data preceding the anomalous state from a normal state of operation"; [0007], "Both for classification and regression, a useful technique can be used to assign weight to the contributions of the neighbors, so that the nearer neighbors contribute more to the average than the more distant ones."; [0499], "The method may further comprise receiving stream of sensor data received after the training phase; determining an anomalous state of operation of the system based on differences between the received stream of sensor data received after the training phase; and tagging a log of sensor data received after the training phase with an annotation of anomalous state of operation. The method may further comprise classifying the anomalous state of operation as a particular kind of event"; [0485], "characterizing statistical properties of the stream of data, comprising at least an amplitude dependent parameter and a variance of the amplitude over time parameter for an operating regime representing stable operation; determining a statistical norm for the characterized statistical properties that reliably distinguish between normal operation of the system and anomalous operation of the system; and outputting a signal dependent on whether a concurrent stream of data representing sensed or determined operating parameters of the system represent anomalous operation of the system"; and also [0489]-[0497].
Here, the characterized statistical properties of the stream of data, such as an amplitude dependent parameter and a variance of the amplitude over time parameter for an operating regime representing stable operation, are weight values correlated to the real-time event detection of the first type (i.e., normal or stable operation event detection), while updating the statistical norm based on an anomaly pattern corresponds to event detection of a second type (i.e., anomaly event detection). See also [0503]-[0506]; see [0094], "our criteria of real-time anomaly detection". Here, the stable operation pattern recognition event and the unstable/anomalous operation pattern recognition event are two selectable pattern recognition events associated with respective real-world events, i.e., stable operation and unstable/anomalous operation respectively. It is to be noted that the claim does not require a specific entity to be able to select the pattern recognition events, but merely requires "selectable"), the machine learning pattern recognizer being reconfigurable to perform another function corresponding to a different pattern recognition event associated with a different real-world event and generate real-time event detections of a second type different from the first type by selecting and applying a different set of the pre-trained machine learning weight values correlated to the real-time event detections of the second type for predetermined task actions corresponding thereto (see citation in rejection to the preceding limitation, wherein the detected anomaly events are of a second type. It is to be noted that the claim does not require a specific relationship between the first type and the second type; the Examiner therefore interprets it as any relationship. See also [0093], "At each point in time we would like to determine whether the behavior of the system is unusual. The determination is preferably made in real-time. That is, before seeing the next input, the algorithm must consider the current and previous states to decide whether the system behavior is anomalous, as well as perform any model updates and retraining." See [0497], "receiving a defined signature of sensor data obtained leading to an anomalous state of a second system from the second system, and integrating the defined signature with the determined statistical norm, such that the statistical norm may be updated to distinguish a pattern of sensor data preceding the anomalous state from a normal state of operation", wherein the updated statistical norm(s) used for detecting the anomalous state are a selected and applied different set of the pre-trained machine learning weight values); and an application installable on a user device, the application being in communication with each of the one or more programmable edge devices and executing the predetermined actions based upon the real-time event detection evaluations from the machine learning pattern recognizer (see citation in rejection to the preceding limitation above, and abstract, "outputting a signal based on whether a concurrent stream of data representing sensed operating parameters of the system represent anomalous operation of the system"; [0505], the user's inputs via an input device in response to predicted anomaly signals received from the edge/sensor device are actions based on the event detection evaluations; see also [0482], "When a failure classification is obtained, the alerts system sends notifications to human operators and/or automated systems."), the application including a scheduler initiating a reconfiguration based upon a secondary condition, of at least one of the programmable edge devices with the pre-trained machine learning weight values correlated to the real-time event detections of the second type and corresponding predetermined task actions (see citation above, e.g., [0497], "receiving a defined signature of sensor data obtained leading to an anomalous state of a second system from the second system, and integrating the defined signature with the determined statistical norm, such that the statistical norm may be updated to distinguish a pattern of sensor data preceding the anomalous state from a normal state of operation", wherein the first updated statistical norm(s) are the pre-trained machine learning weight values correlated to the real-time event detections of the second type (i.e., the anomaly event detection), the corresponding actions for the detected abnormity are corresponding task actions, and wherein receiving the defined signature of sensor data obtained leading to an anomalous state of a second system is a secondary condition based upon which the reconfiguration is initiated. Alternatively, see [0422], "A user interface may be used to view historical engine data and/or error time series data, and to select and tag time periods of interest. Then, the system calculates robust Mahalanobis distances (and/or Bhattacharyya distances) from the z-scores of error data from multiple engine sensors of interests and stores the calculated range for the tagged time periods in the edge device and/or cloud database for further analysis"; and [0093], "before seeing the next input, the algorithm must consider the current and previous states to decide whether the system behavior is anomalous, as well as perform any model updates and retraining.").

However, if the claimed "real-world event" were construed more narrowly, then Bhattacharyya does not expressly disclose that the pattern recognition events are associated with respective real-world events.

GUZIK discloses a concept for pattern recognition events to be associated with respective real-world events (Figure 6, "Co-occurrence learning model" indicates correlation training. See [0060]-[0070], wherein the event classifier 610 reads on the machine learning pattern recognizer generating real-time event detections of multiple types of events, e.g., the firework and the gunfire are different types of event detections, by utilizing an event detector (with context model and co-occurrence model) with different weight values. For example, see [0069], "For example, the third model may determine the likelihood of an occurrence of gunfire from the input 600. In this example, the third model may use features such as venue and date of the detected sound that may coincide with fireworks due to holiday celebrations and measurements in decibels of the sound. In this case, the third model utilizes the combination of the context model 622 feature (e.g., holiday celebration date) and co-occurrence model 624 feature (e.g., actual measurements of the sound) to determine the likelihood of occurrence of the event." This indicates that a different combination of "whether celebration date" and "actual measurement of sounds" (reading on different weight values) is used to correlate to different event detections, i.e., gunfire or fireworks, which are real-world events).

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Bhattacharyya with GUZIK. The suggestion/motivation for the combination would have been to differentiate gunfire from fireworks (GUZIK, [0060]-[0070]).

As to claim 9, see similar rejection to claim 1, wherein the updated event detection using the updated machine learning weight values is considered event detections of a second type.

As to claim 18, see similar rejection to claim 9. Bhattacharyya-GUZIK also teaches correlating the real-time event detections of the first type to one or more actions of the first type; executing the one or more actions of the first type on the user device (see Bhattacharyya, [0089], "Supervised anomaly detection.
Availability of a training data set with labelled instances for normal and anomalous behavior is assumed. Typically, predictive models are built for normal and anomalous behavior, and unseen data are assigned to one of the classes."; [0422], "Some embodiments of the classification system provide a mechanism (e.g., a design and deployment tool(s)) to select unique, short time periods for an asset and tag (or label) the selected periods with arbitrary strings that denote classification types. A user interface may be used to view historical engine data and/or error time series data, and to select and tag time periods of interest". See also [0500], "The method may further comprise determining whether a stream of sensor data received after the training phase may be in a stable operating state and tagging a log of the stream of sensor data with a characterization of the stability." See also GUZIK, as cited in rejection to claim 1, e.g., [0060]-[0070]).

As to claim 3, Bhattacharyya-GUZIK discloses the system of Claim 1, wherein additional pre-trained machine learning weight values are transmissible from the application on the user device to any one of the one or more programmable edge devices for storage in the respective memories (Bhattacharyya, [0489]-[0497], weight values such as the statistical norm are provided to the edge device by a remote server; [0502], "a model is built and synchronized or communicated by both sides of a communication pair"; [0497], "receiving a defined signature of sensor data obtained leading to an anomalous state of a second system from the second system and performing a signature analysis of a stream of sensor data after the training phase…receiving a defined signature of sensor data obtained leading to an anomalous state of a second system from the second system, and integrating the defined signature with the determined statistical norm, such that the statistical norm may be updated to distinguish a pattern of sensor data preceding the anomalous state from a normal state of operation"; [0422], "A user interface may be used to view historical engine data and/or error time series data, and to select and tag time periods of interest. Then, the system calculates robust Mahalanobis distances (and/or Bhattacharyya distances) from the z-scores of error data from multiple engine sensors of interests and stores the calculated range for the tagged time periods in the edge device and/or cloud database for further analysis". The time periods selected by the user and sent to the edge device are equivalent to additional weight values).

As to claim 5, Bhattacharyya-GUZIK discloses the system of Claim 1, wherein the secondary condition is selected from a group consisting of: time-of-day, environmental condition, locale condition, and user preference (see citation in rejection to claim 1, e.g., Bhattacharyya, [0497], "receiving a defined signature of sensor data obtained leading to an anomalous state of a second system from the second system, and integrating the defined signature with the determined statistical norm, such that the statistical norm may be updated to distinguish a pattern of sensor data preceding the anomalous state from a normal state of operation", wherein receiving the defined signature of sensor data obtained leading to an anomalous state of a second system from the second system is an environmental condition indicating a state at the second system which is environmental to the first system).

As to claim 6, Bhattacharyya-GUZIK discloses the system of Claim 1, wherein the application includes a library of user-selectable pattern recognition events each having an associated pre-trained machine learning weight value (Bhattacharyya, [0422], "A user interface may be used to view historical engine data and/or error time series data, and to select and tag time periods of interest. Then, the system calculates robust Mahalanobis distances (and/or Bhattacharyya distances) from the z-scores of error data from multiple engine sensors of interests and stores the calculated range for the tagged time periods in the edge device and/or cloud database for further analysis").

As to claim 8, Bhattacharyya-GUZIK discloses the system of Claim 1, wherein each of the one or more programmable edge devices includes a wireless communications module, the user device and the application being in communication with the one or more programmable edge devices over a wireless link established thereby (Bhattacharyya, [0525], "The communication network interface device used in the system may enable wireless communications for the transfer of data to and from the computing device"; [0051], "with application to anomaly detection in a real wireless sensor network data set"; [0346], "an edge device installed at or near the engine.").

As to claim 17, Bhattacharyya-GUZIK discloses the method of Claim 8, wherein the communication link is wireless (see citation in rejection to claim 8).

As to claim 14, Bhattacharyya-GUZIK discloses the method of Claim 9, further comprising: receiving a selection of a pattern recognition event from a library of user-selectable pattern recognition events, each pattern recognition event having an associated pre-trained machine learning weight value; and transmitting the pre-trained machine learning weight value corresponding to the selected one of the pattern recognition events to the one or more programmable edge devices (Bhattacharyya, [0422], "A user interface may be used to view historical engine data and/or error time series data, and to select and tag time periods of interest. Then, the system calculates robust Mahalanobis distances (and/or Bhattacharyya distances) from the z-scores of error data from multiple engine sensors of interests and stores the calculated range for the tagged time periods in the edge device and/or cloud database for further analysis").

As to claim 11, Bhattacharyya-GUZIK discloses the method of Claim 9, wherein the secondary condition is selected from a group consisting of: time-of-day, environmental condition, locale condition, and user preference (see similar rejection to claim 5).

As to claim 23, Bhattacharyya-GUZIK discloses the article of manufacture of Claim 18, wherein the secondary condition is selected from a group consisting of: time-of-day, environmental condition, locale condition, and user preference (see similar rejection to claim 11).

As to claim 12, Bhattacharyya-GUZIK discloses the method of Claim 9, wherein reconfiguring the given one of the one or more programmable edge devices includes transmitting the different pre-trained machine learning weight value corresponding to the different real-time event detection (see citation and explanation in rejection to claim 9, e.g., Bhattacharyya, [0497], "receiving a defined signature of sensor data obtained leading to an anomalous state of a second system from the second system, and integrating the defined signature with the determined statistical norm, such that the statistical norm may be updated to distinguish a pattern of sensor data preceding the anomalous state from a normal state of operation").
As to claim 13, Bhattacharyya-GUZIK discloses the method of Claim 9, further comprising: receiving an excerpt of sensor input data in conjunction with the real-time event detection therefor; and feeding the excerpt of the sensor input data and the corresponding real-time event detection to a machine learning training validator (Bhattacharyya, [0497], "receiving a defined signature of sensor data obtained leading to an anomalous state of a second system from the second system and performing a signature analysis of a stream of sensor data after the training phase…receiving a defined signature of sensor data obtained leading to an anomalous state of a second system from the second system, and integrating the defined signature with the determined statistical norm, such that the statistical norm may be updated to distinguish a pattern of sensor data preceding the anomalous state from a normal state of operation"; [0505], "True anomalies can be detected when a user provides input in near real-time that a predicted anomaly is a false alert or when a threshold set on a sensor is exceeded. Thresholds can either be set by following manufacturer's specifications for normal operating ranges or by setting statistical thresholds determined by analyzing the distribution of data during normal sensor operation and identifying high and low thresholds").

As to claim 16, Bhattacharyya-GUZIK discloses the method of Claim 9, further comprising: retrieving a new pattern recognition event with an associated pre-trained machine learning weight value from a remote source (Bhattacharyya, [0506], "In these embodiments, when drift is detected, the system can trigger generation of new models (e.g., of same or different model types) on the most recent data for the sensor. The system can compare the performance of different models or model types on identical test data sampled from the most recent sensor data and put a selected model"; abstract, "responsive to detecting that normal and anomalous operation of the system can no longer be reliably distinguished, determining new statistical properties to distinguish between normal and anomalous system operation"; see citation in rejection to claim 1 such as the abstract for retrieving abnormity signals).

As to claim 22, Bhattacharyya-GUZIK discloses the article of manufacture of Claim 18, wherein the method of monitoring and operating one or more programmable edge devices from a user device includes: retrieving a new pattern recognition event with an associated pre-trained machine learning weight value from a remote source (Bhattacharyya, [0497], "receiving a defined signature of sensor data obtained leading to an anomalous state of a second system from the second system, and integrating the defined signature with the determined statistical norm, such that the statistical norm may be updated to distinguish a pattern of sensor data preceding the anomalous state from a normal state of operation").

7. Claims 2, 21, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Bhattacharyya-GUZIK, as applied to claim 1 above, and further in view of Giro Benet et al. (US 2024/0053341).

As to claim 2, Bhattacharyya-GUZIK discloses the claimed invention substantially as discussed in Claim 1, but does not expressly disclose wherein the machine learning pattern recognizer is selected from a group consisting of: a multilayer perceptron (MLP), Convolutional Neural Network (CNN), and a recurrent neural network (RNN). Giro Benet discloses a concept for a machine learning pattern recognizer to be selected from a group consisting of: a multilayer perceptron (MLP), Convolutional Neural Network (CNN), and a recurrent neural network (RNN) ([0103]-[0104]).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Bhattacharyya-GUZIK with Giro Benet. The suggestion/motivation for the combination would have been to utilize known machine learning models (Giro Benet, [0103]-[0105]).

As to claim 21, Bhattacharyya-GUZIK in view of Giro Benet discloses the method of Claim 9, wherein the machine learning pattern recognizer is selected from a group consisting of: a multilayer perceptron (MLP), Convolutional Neural Network (CNN), and a recurrent neural network (RNN) (see similar rejection to claim 2).

As to claim 24, Bhattacharyya-GUZIK in view of Giro Benet discloses the article of manufacture of Claim 18, wherein the machine learning pattern recognizer is selected from a group consisting of: a multilayer perceptron (MLP), Convolutional Neural Network (CNN), and a recurrent neural network (RNN) (see similar rejection to claim 21).

8. Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Bhattacharyya-GUZIK, as applied to claim 1 above, and further in view of Gallo (US 2015/0382084).

As to claim 7, Bhattacharyya-GUZIK discloses the claimed inventions substantially as discussed in Claim 1, but does not expressly disclose that the one or more programmable edge devices are organized in a hierarchical relationship over a plurality of locations, the application maintaining the hierarchical relationship in a user interface thereto for managing the one or more programmable edge devices.
Gallo discloses one or more edge devices organized in a hierarchical relationship over a plurality of locations, an application maintaining the hierarchical relationship in a user interface thereto for managing the one or more edge devices ([0157], "a graphical user interface comprising a first element configured to display indicators associated with a plurality of sensors arranged in a hierarchical relationship by location; and a second element configured to display sensor representations associated with the plurality of sensors").

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Bhattacharyya-GUZIK with Gallo. The suggestion/motivation for the combination would have been to improve user friendliness.

As to claim 15, see similar rejection to claim 7.

Conclusion

9. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUA FAN, whose telephone number is (571) 270-5311. The examiner can normally be reached on 9-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Umar Cheema, can be reached at 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUA FAN/ Primary Examiner, Art Unit 2458
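For orientation, the statistical-norm technique the rejection repeatedly cites from Bhattacharyya ([0485]: characterize an amplitude parameter and its variance during stable operation, then signal when a concurrent stream deviates from that norm) can be sketched roughly as follows. This is an illustrative approximation only; the z-score test and the 3-sigma threshold are assumptions, not the reference's actual implementation:

```python
import statistics

# Rough sketch of the anomaly test described in Bhattacharyya [0485]:
# learn a statistical norm (amplitude mean plus spread) from a stable
# training window, then flag later readings that deviate from it.
# The 3-sigma z-score threshold is an illustrative assumption.

def fit_norm(training_window):
    """Characterize stable operation: amplitude mean and sample stdev."""
    return statistics.fmean(training_window), statistics.stdev(training_window)

def is_anomalous(reading, norm, threshold=3.0):
    """Signal anomalous operation when the z-score exceeds the threshold."""
    mean, stdev = norm
    return abs(reading - mean) / stdev > threshold

norm = fit_norm([10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7])
print(is_anomalous(10.1, norm))   # within the learned norm
print(is_anomalous(14.0, norm))   # far outside the norm
```

In the claim's terms, "reconfiguring" the recognizer would amount to swapping in a different fitted norm (a different weight set) correlated to a different event type.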

Prosecution Timeline

Jul 07, 2023: Application Filed
Mar 19, 2024: Non-Final Rejection — §103
Sep 17, 2024: Response Filed
Nov 15, 2024: Final Rejection — §103
May 19, 2025: Request for Continued Examination
May 29, 2025: Response after Non-Final Action
Jul 14, 2025: Non-Final Rejection — §103
Jan 14, 2026: Response Filed
Feb 12, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598113: APPLYING MANAGEMENT CONSTRAINTS DURING NETWORK SLICE DESIGN (2y 5m to grant; granted Apr 07, 2026)
Patent 12585234: METHOD FOR ASSOCIATING ACTIONS FOR INTERNET OF THINGS, ELECTRONIC DEVICE AND STORAGE MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12574801: OPEN RADIO ACCESS NETWORK CLOUD INTELLIGENT CONTROLLER (2y 5m to grant; granted Mar 10, 2026)
Patent 12568521: SCHEDULING TRANSMISSION METHOD AND APPARATUS (2y 5m to grant; granted Mar 03, 2026)
Patent 12501491: RACH BASED ON FMCW CHANNEL SOUNDING (2y 5m to grant; granted Dec 16, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 66%
With Interview (+8.4%): 74%
Median Time to Grant: 5y 4m
PTA Risk: High

Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
