Prosecution Insights
Last updated: April 19, 2026
Application No. 17/656,201

ALWAYS-ON LOCAL ACTION CONTROLLER FOR LOW POWER, BATTERY-OPERATED AUTONOMOUS INTELLIGENT DEVICES

Final Rejection (§102, §103)

Filed: Mar 23, 2022
Examiner: FLANDERS, ANDREW C
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: Aondevices Inc.
OA Round: 2 (Final)

Grant Probability: 74% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 74% (574 granted / 775 resolved; +12.1% vs TC avg; grants above average)
Interview Lift: +14.0% (moderate lift, with vs. without interview, across resolved cases with interview)
Avg Prosecution: 3y 3m typical timeline; 9 applications currently pending
Total Applications: 784 career history, across all art units
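The card figures above are internally consistent. A minimal sketch (variable names are illustrative, not from any analytics API) of how the derived numbers follow from the raw career counts:

```python
# Derive the dashboard figures from the examiner's raw career counts.
granted = 574    # career grants
resolved = 775   # resolved (disposed) applications
total = 784      # total applications across all art units

allow_rate = granted / resolved      # career allowance rate
pending = total - resolved           # applications currently pending
with_interview = allow_rate + 0.14   # reported +14.0% interview lift

print(f"Allowance rate: {allow_rate:.0%}")       # 74%
print(f"Currently pending: {pending}")           # 9
print(f"With interview: {with_interview:.0%}")   # 88%
```

Note that 784 total minus 775 resolved reproduces the "9 currently pending" figure, and the base rate plus the +14.0% lift reproduces the 88% with-interview estimate.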

Statute-Specific Performance

§101: 10.3% (-29.7% vs TC avg)
§103: 38.7% (-1.3% vs TC avg)
§102: 31.6% (-8.4% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Tech Center averages are estimates; based on career data from 775 resolved cases.
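As a quick consistency check, each statute's rate minus its reported delta should recover the Tech Center baseline estimate; with the figures above, every statute implies the same 40.0% baseline. A sketch, with the numbers copied from the card:

```python
# Implied Tech Center average per statute, from rate and delta-vs-TC.
# Since delta = rate - tc_avg, the baseline is tc_avg = rate - delta.
stats = {
    "§101": (10.3, -29.7),
    "§103": (38.7, -1.3),
    "§102": (31.6, -8.4),
    "§112": (8.3, -31.7),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: implied TC average {tc_avg:.1f}%")  # 40.0% for each
```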

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, filed 24 April 2025, with respect to the objections to the specification have been fully considered and are persuasive. The objections to the specification have been withdrawn.

Applicant’s arguments, filed 24 April 2025, with respect to the rejection of claim 2 under 35 U.S.C. 112 have been fully considered and are persuasive. The rejection of claim 2 under 35 U.S.C. 112 has been withdrawn.

Applicant’s arguments with respect to the rejection of the claim(s) under 35 U.S.C. 103 have been considered but are moot in view of the new grounds of rejection necessitated by applicant’s amendments to the claims and detailed below. As a general note, while Siminoff details an invention somewhat different from applicant’s characterization in the remarks, the current set of claims does not preclude the interpretation and application of Siminoff detailed below.

Election/Restrictions

Newly submitted claims 26 – 33 are directed to an invention that is independent or distinct from the invention originally claimed for the following reasons: amended independent claims 1, 21, 26, and 34 are each directed to an always-on local action controller, each containing a number of variations, and each independent claim and its associated dependent claims are directed to a species of similar inventions. Claims 1, 21, and 34 cover distinct elements not present in the others: claim 1 includes integration into a remote controller and an action signal that allows for volume control and muting, claim 21 includes integration into a mobile communications device and an action signal that initiates a telephone call, and claim 34 includes integration into a vehicle along with an action signal that initiates an alert to a pre-designated contact.
However, claim 26 and its associated dependents include significant differences, including an explicit first and second sensor, each receiving a first and second external input, establishing patterns of first signal data and second signal data corresponding to the first and second sensor, each first and second signal data directed to specific first and second events of distress and user endangerment. While claims 1, 21, and 34 may be directed to species of inventions that are related but distinct from each other, they cover subject matter similar enough to the originally filed claim set that it is not a burden on examination. Claim 26 and its dependents, however, are not, and cover a distinct species directed to subject matter not originally presented that would cause a burden to examine.

Since applicant has received an action on the merits for the originally presented invention, this invention has been constructively elected by original presentation for prosecution on the merits. Accordingly, claims 26 – 33 are withdrawn from consideration as being directed to a non-elected invention. See 37 CFR 1.142(b) and MPEP § 821.03.

To preserve a right to petition, the reply to this action must distinctly and specifically point out supposed errors in the restriction requirement. Otherwise, the election shall be treated as a final election without traverse. Traversal must be timely. Failure to timely traverse the requirement will result in the loss of right to petition under 37 CFR 1.144. If claims are subsequently added, applicant must indicate which of the subsequently added claims are readable on the elected invention.

Should applicant traverse on the ground that the inventions are not patentably distinct, applicant should submit evidence or identify such evidence now of record showing the inventions to be obvious variants, or clearly admit on the record that this is the case.
In either instance, if the examiner finds one of the inventions unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other invention.

Claim Objections

Claim 1 is objected to because of the following informalities: claim 1 recites “...a sound of breaking class...”, which appears as though it should read “...a sound of breaking glass...”. For the purpose of expediting prosecution, the claim will be interpreted in this manner. Claims 6 – 11 are objected to on the same grounds as being dependent upon an objected-to base claim. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 6 – 8, 21 – 24, and 34 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Siminoff et al. (hereinafter Siminoff, U.S. Patent 11,217,076).

Regarding Claim 1, Siminoff discloses:

An always-on local action controller (e.g., “A/V Recording and Communication Device,” Col. 3, lines 7-10: methods for using audio and video data to detect tampering; Col. 4, lines 37-55: the result of a determination of tampering can be activation of devices, therefore under its broadest reasonable interpretation the system can be considered an action controller; Col. 26, lines 8-15: a first A/V recording device actively capturing audio and/or video data may be used to transition other devices from a passive state to an active state, thus the first A/V device can be considered an always-on local action controller) incorporated into a remote controller for an entertainment device (e.g., A/V recording and communication device 200 includes a communication module configured to transmit data to a “remote” network device; see Fig. 2 and Col. 12, lines 6-31; note further the distributed network elements and control elements present for the one or more A/V recording and communication devices, Col. 30, lines 18-57; and see further wireless speaker 112, which serves to produce an audible sound responsive to the A/V recording and communication device, Col. 9, lines 12-36), the always-on local action controller comprising:

one or more sensors each receptive to an external input, the respective external inputs being translatable to corresponding signals, a first one of the one or more sensors being a microphone and a first external input being an audio wave (Fig. 1, elements camera 230 and microphone 214; note also capture of audio through the microphone 214, Col. 6, line 66 – Col. 7, line 1);

one or more always-on data analytic neural network subsystems each connected to a respective one of the one or more sensors and receptive to the signals outputted therefrom (Col. 13, lines 3-5: audio analysis modules extract features from the audio signals; lines 28-32: video analysis modules extract features from the video/image signals; Col. 16, lines 24-33: a high-level processing step involves processing the feature data to classify detected objects; lines 37-46: the computer vision module uses machine learning, such as convolutional neural networks; Col. 37, lines 51-54: audio features are processed by a neural network computer model. Thus the audio analysis modules and video analysis modules/computer vision module may be considered always-on data analytic neural network subsystems each connected to a respective one of the one or more sensors and receptive to the signals outputted therefrom),

an event detection being raised by a given one of the always-on data analytical neural network subsystems in response to a pattern of signal data corresponding to an event (Col. 3, lines 7-10: methods for using audio and video data to detect tampering, i.e., detect an event; lines 16-18: a determination of an occurrence of tampering, i.e., an event detection, being raised based on processing of the audio data by the audio analysis module, i.e., a given one of the always-on data analytical neural network subsystems; lines 40-43 similarly detail the determination of an instance of tampering based on the analysis of the video data; Col. 13, lines 15-27: audio analysis modules may carry out pattern matching; Col. 15, lines 43-53: the sensors of the disclosure may be connected to a computer vision module which performs functions with respect to computer vision; Col. 15, lines 32-40: computer vision tasks include using pattern matching to differentiate objects in image data; Col. 14, lines 56-60: computer vision methods involve manipulating high-dimensional data to produce decisions) associated with an auditory emergency of a sound of a crying infant, a sound of breaking glass, or a sound of an alarm (e.g., note the use of glass-break sensors for intrusion detection, which are used to send alerts responsive to the data from the security sensors and are also configured to alter the state of the system to an armed state, Col. 9, lines 11-36 and Col. 48, lines 51-63; see further other A/V recording devices collecting audio and/or video data used to determine the occurrence of tampering in the same neighborhood, Col. 4, line 54 – Col. 5, line 5; in other words, another A/V device collects the alarm sound triggered by the others; further note detecting specific patterns of sounds associated with tampering, Col. 31, lines 44-50); and

a decision combiner connected to each of the one or more always-on data analytic neural network subsystems, an action signal to a volume control setting of the remote controller to reduce or mute an output volume of the entertainment device being generated (e.g., note control of smart home devices, including turning on/off, turning up/down, or disarming the various devices in the system, Col. 49, line 62 – Col. 50, line 6; triggering an audible or other alarm on wireless speaker device 112, Col. 9, lines 20-30; outputting that alarm on speaker 112, Col. 9, lines 12-36; finally, note that the alarm sound can be disabled, Col. 35, lines 30-35; disabling is a form of control similar to the on/off and up/down control detailed above, and in its broadest sense can be construed as “reducing or muting” an output volume of the speaker 112) based upon an aggregate of the events provided thereby (Fig. 7, step 704: an occurrence of tampering can be determined based on determining both that the audio data and the video data are indicative of tampering, i.e., based upon an aggregate of the events determined by the audio analysis modules and video analysis modules/computer vision module; Col. 36, lines 9-19: this determination can be done by the A/V recording and communication device using the processes of Figs. 6A-D, which detail the individual processes by which each of the audio and visual analysis modules determines occurrences of tampering; Col. 4, lines 27-36: as a result of a determination of an occurrence of tampering, the A/V recording and communication device transmits communications, i.e., sends an action signal, to the system. This is further supported at Col. 4, lines 37-39: in addition, or alternatively, one or more other actions can be performed as a result of a determination of an occurrence of tampering; these actions may include, for example, causing computing or other functions to occur on devices other than the A/V recording and communication device(s) from which audio and/or video data was collected and used to determine the occurrence of tampering).

Regarding Claim 6, in addition to the elements stated above regarding claim 1, Siminoff further discloses: wherein each of the always-on data analytic neural network subsystems includes: a feature extractor connected to the corresponding one of the sensors and receptive to the signal outputted therefrom, feature data associated with the signal being generated by the feature extractor (Col. 13, lines 3-5: audio analysis modules extract features from the audio signals; lines 28-32: video analysis modules extract features from the video/image signals); and a neural network connected to the feature extractor and being specific to a modality of the feature extractor, the event detection being generated from patterns of the feature data generated by the feature extractor (Col. 16, lines 24-33: a high-level processing step involves processing the feature data to classify detected objects; lines 37-46: the computer vision module uses machine learning, such as convolutional neural networks; Col. 37, lines 51-54: audio features are processed by a neural network computer model; Col. 13, lines 15-27: audio analysis modules may carry out pattern matching; Col. 15, lines 43-53: the sensors of the disclosure may be connected to a computer vision module which performs functions with respect to computer vision; Col. 15, lines 32-40: computer vision tasks include using pattern matching to differentiate objects in image data; Col. 14, lines 56-60: computer vision methods involve manipulating high-dimensional data to produce decisions).
Regarding Claim 7, in addition to the elements stated above regarding claim 6, Siminoff further discloses: wherein the neural network is a multi-class classifier neural network (Col. 38, lines 29-32: an example of a computer model at blocks 810-812 is a recurrent neural network (RNN) composed of long short-term memory (LSTM) units; Col. 13, lines 46-57: the audio analysis module and video analysis module may include classifiers; an example is given of a classifier that determines whether a video frame represents a daytime scene or a nighttime scene, therefore the module may be considered a multi-class classifier).

Regarding Claim 8, in addition to the elements stated above regarding claim 7, Siminoff further discloses: wherein the multi-class classifier neural network is selected from a group consisting of: a convolutional neural network (CNN), a long short term memory network (LSTM), a recurrent neural network (RNN), and a multilayer perceptron (MLP) (Col. 38, lines 29-32: an example of a computer model at blocks 810-812 is a recurrent neural network (RNN) composed of long short-term memory (LSTM) units).

Regarding Claim 21, Siminoff discloses:

An always-on local action controller (e.g., “A/V Recording and Communication Device,” Col. 3, lines 7-10: methods for using audio and video data to detect tampering; Col. 4, lines 37-55: the result of a determination of tampering can be activation of devices, therefore under its broadest reasonable interpretation the system can be considered an action controller; Col. 26, lines 8-15: a first A/V recording device actively capturing audio and/or video data may be used to transition other devices from a passive state to an active state, thus the first A/V device can be considered an always-on local action controller) incorporated into a mobile communications device (e.g., A/V recording and communication device 200 is able to send streaming audio/video for communicating; see also answering and duration of the call, Col. 7, lines 9-21), the always-on local action controller comprising:

one or more sensors each receptive to an external input, the respective external inputs being translatable to corresponding signals, a first one of the one or more sensors being a microphone and a first external input being an audio wave (Fig. 1, elements camera 230 and microphone 214; note also capture of audio through the microphone 214, Col. 6, line 66 – Col. 7, line 1);

one or more always-on data analytic neural network subsystems each connected to a respective one of the one or more sensors and receptive to the signals outputted therefrom (Col. 13, lines 3-5: audio analysis modules extract features from the audio signals; lines 28-32: video analysis modules extract features from the video/image signals; Col. 16, lines 24-33: a high-level processing step involves processing the feature data to classify detected objects; lines 37-46: the computer vision module uses machine learning, such as convolutional neural networks; Col. 37, lines 51-54: audio features are processed by a neural network computer model. Thus the audio analysis modules and video analysis modules/computer vision module may be considered always-on data analytic neural network subsystems each connected to a respective one of the one or more sensors and receptive to the signals outputted therefrom),

an event detection being raised by a given one of the always-on data analytical neural network subsystems in response to a pattern of signal data corresponding to an event (Col. 3, lines 7-10: methods for using audio and video data to detect tampering, i.e., detect an event; lines 16-18: a determination of an occurrence of tampering, i.e., an event detection, being raised based on processing of the audio data by the audio analysis module, i.e., a given one of the always-on data analytical neural network subsystems; lines 40-43 similarly detail the determination of an instance of tampering based on the analysis of the video data; Col. 13, lines 15-27: audio analysis modules may carry out pattern matching; Col. 15, lines 43-53: the sensors of the disclosure may be connected to a computer vision module which performs functions with respect to computer vision; Col. 15, lines 32-40: computer vision tasks include using pattern matching to differentiate objects in image data; Col. 14, lines 56-60: computer vision methods involve manipulating high-dimensional data to produce decisions) of a sound of breaking glass or a sound of an alarm (e.g., note the use of glass-break sensors for intrusion detection, which are used to send alerts responsive to the data from the security sensors and are also configured to alter the state of the system to an armed state, Col. 9, lines 11-36 and Col. 48, lines 51-63; see further other A/V recording devices collecting audio and/or video data used to determine the occurrence of tampering in the same neighborhood, Col. 4, line 54 – Col. 5, line 5; in other words, another A/V device collects the alarm sound triggered by the others; further note detecting specific patterns of sounds associated with tampering, Col. 31, lines 44-50); and

a decision combiner connected to each of the one or more always-on data analytic neural network subsystems, an action signal to initiate a telephone call from the mobile communications device to a pre-designated contact or an audible alarm being generated (e.g., A/V recording and communication device 200 is able to send streaming audio/video for communicating; see also answering and duration of the call, Col. 7, lines 9-21; further note placing a call to emergency services based on the A/V recording and communication device, Col. 33, lines 24-27; and further, call center contact through the network, Col. 51, lines 29-47; see also control of smart home devices, including turning on/off, turning up/down, or disarming the various devices in the system, Col. 49, line 62 – Col. 50, line 6) based upon an aggregate of the events provided thereby (Fig. 7, step 704: an occurrence of tampering can be determined based on determining both that the audio data and the video data are indicative of tampering, i.e., based upon an aggregate of the events determined by the audio analysis modules and video analysis modules/computer vision module; Col. 36, lines 9-19: this determination can be done by the A/V recording and communication device using the processes of Figs. 6A-D, which detail the individual processes by which each of the audio and visual analysis modules determines occurrences of tampering; Col. 4, lines 27-36: as a result of a determination of an occurrence of tampering, the A/V recording and communication device transmits communications, i.e., sends an action signal, to the system. This is further supported at Col. 4, lines 37-39: in addition, or alternatively, one or more other actions can be performed as a result of a determination of an occurrence of tampering; these actions may include, for example, causing computing or other functions to occur on devices other than the A/V recording and communication device(s) from which audio and/or video data was collected and used to determine the occurrence of tampering).

Claim 22 is directed to limitations identical to those presented in claim 6 and is rejected under the same grounds stated above. Claim 23 is directed to limitations identical to those presented in claim 7 and is rejected under the same grounds stated above. Claim 24 is directed to limitations identical to those presented in claim 8 and is rejected under the same grounds stated above.

Regarding Claim 34, Siminoff discloses:

An always-on local action controller (e.g., “A/V Recording and Communication Device,” Col. 3, lines 7-10: methods for using audio and video data to detect tampering; Col. 4, lines 37-55: the result of a determination of tampering can be activation of devices, therefore under its broadest reasonable interpretation the system can be considered an action controller; Col. 26, lines 8-15: a first A/V recording device actively capturing audio and/or video data may be used to transition other devices from a passive state to an active state, thus the first A/V device can be considered an always-on local action controller) incorporated into a vehicle (e.g., video data can be collected from a camera of an A/V recording and communication device, and analysis of the video data can determine that the camera is moving, Col. 4, lines 10-15; the Oxford English Dictionary (“OED”) provides one definition for “vehicle” of “A general term for: anything by means of which people or goods may be conveyed, carried or transported; a receptacle in which something is or may be placed in order to be moved.” Siminoff’s devices, particularly those including cameras, are incorporated into devices capable of movement as illustrated throughout the specification, and specifically at Col. 4, lines 5-10, Col. 27, lines 60-67, and Col. 34, line 63 – Col. 35, line 3. Given that the OED definition of vehicle amounts to a means by which, or something in which, something is placed in order to be moved, the elements/devices of Siminoff that are detailed as moveable can be said to broadly meet incorporation into a vehicle), the always-on local action controller comprising:

one or more sensors each receptive to an external input, the respective external inputs being translatable to corresponding signals, a first one of the one or more sensors being a microphone and a first external input being an audio wave (Fig. 1, elements camera 230 and microphone 214; note also capture of audio through the microphone 214, Col. 6, line 66 – Col. 7, line 1);

one or more always-on data analytic neural network subsystems each connected to a respective one of the one or more sensors and receptive to the signals outputted therefrom (Col. 13, lines 3-5: audio analysis modules extract features from the audio signals; lines 28-32: video analysis modules extract features from the video/image signals; Col. 16, lines 24-33: a high-level processing step involves processing the feature data to classify detected objects; lines 37-46: the computer vision module uses machine learning, such as convolutional neural networks; Col. 37, lines 51-54: audio features are processed by a neural network computer model. Thus the audio analysis modules and video analysis modules/computer vision module may be considered always-on data analytic neural network subsystems each connected to a respective one of the one or more sensors and receptive to the signals outputted therefrom),

an event detection being raised by a given one of the always-on data analytical neural network subsystems in response to a pattern of first signal data corresponding to an event (Col. 3, lines 7-10: methods for using audio and video data to detect tampering, i.e., detect an event; lines 16-18: a determination of an occurrence of tampering, i.e., an event detection, being raised based on processing of the audio data by the audio analysis module, i.e., a given one of the always-on data analytical neural network subsystems; lines 40-43 similarly detail the determination of an instance of tampering based on the analysis of the video data; Col. 13, lines 15-27: audio analysis modules may carry out pattern matching; Col. 15, lines 43-53: the sensors of the disclosure may be connected to a computer vision module which performs functions with respect to computer vision; Col. 15, lines 32-40: computer vision tasks include using pattern matching to differentiate objects in image data; Col. 14, lines 56-60: computer vision methods involve manipulating high-dimensional data to produce decisions) associated with an auditory emergency of breaking glass, screaming voices, or crackling fire (e.g., note the use of glass-break sensors for intrusion detection, which are used to send alerts responsive to the data from the security sensors and are also configured to alter the state of the system to an armed state, Col. 9, lines 11-36 and Col. 48, lines 51-63; see further other A/V recording devices collecting audio and/or video data used to determine the occurrence of tampering in the same neighborhood, Col. 4, line 54 – Col. 5, line 5; in other words, another A/V device collects the alarm sound triggered by the others; further note detecting specific patterns of sounds associated with tampering, Col. 31, lines 44-50); and

a decision combiner connected to each of the one or more always-on data analytic neural network subsystems, an action signal to initiate an alert to a pre-designated contact being generated (e.g., A/V recording and communication device 200 is able to send streaming audio/video for communicating; see also answering and duration of the call, Col. 7, lines 9-21; further note placing a call to emergency services based on the A/V recording and communication device, Col. 33, lines 24-27; and further, call center contact through the network, Col. 51, lines 29-47; see also control of smart home devices, including turning on/off, turning up/down, or disarming the various devices in the system, Col. 49, line 62 – Col. 50, line 6) based upon an aggregate of the events provided thereby (Fig. 7, step 704: an occurrence of tampering can be determined based on determining both that the audio data and the video data are indicative of tampering, i.e., based upon an aggregate of the events determined by the audio analysis modules and video analysis modules/computer vision module; Col. 36, lines 9-19: this determination can be done by the A/V recording and communication device using the processes of Figs. 6A-D, which detail the individual processes by which each of the audio and visual analysis modules determines occurrences of tampering; Col. 4, lines 27-36: as a result of a determination of an occurrence of tampering, the A/V recording and communication device transmits communications, i.e., sends an action signal, to the system. This is further supported at Col. 4, lines 37-39: in addition, or alternatively, one or more other actions can be performed as a result of a determination of an occurrence of tampering; these actions may include, for example, causing computing or other functions to occur on devices other than the A/V recording and communication device(s) from which audio and/or video data was collected and used to determine the occurrence of tampering).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 9 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Siminoff et al. (hereinafter Siminoff, U.S. Patent 11,217,076) in view of Moore, Samuel K. (hereinafter Moore, "Eta Compute Debuts Spiking Neural Network Chip for Edge AI," IEEE Spectrum, 16 Oct. 2018, spectrum.ieee.org/eta-compute-debuts-spiking-neural-network-chip-for-edge-ai).

Regarding Claim 9, in addition to the elements stated above regarding claim 7, Siminoff further fails to explicitly disclose: wherein the neural network consumes less than 100 microwatts of power while in operation. In the same field of machine learning, Moore teaches neural networks that consume less than 100 microwatts of power while in operation (Page 3, para. 1: low-power neural network chip; Page 7, para. 1: burns 50 microwatts in listening mode; see also the sub-title: "Chip can learn on its own and inference at 100-microwatt scale"). It would have been obvious to one of ordinary skill in the art at the time of effective filing to combine the low-powered neural network of Moore with the always-on local action controller of Siminoff in order to increase the energy efficiency of the system.

Claim 25 is directed to limitations identical to those presented in claim 9 and is rejected on the same grounds stated above.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Siminoff et al. (hereinafter Siminoff, U.S. Patent 11,217,076) in view of Zhang et al. (hereinafter Zhang, U.S. Patent Application Publication 2021/0218488).

Regarding Claim 10, in addition to the elements stated above regarding claim 1, Siminoff fails to explicitly disclose: wherein the decision combiner is implemented as a logic circuit accepting as input each of the event detections provided by the one or more always-on data analytic neural network subsystems and generates an output of the action signal. Siminoff discloses the always-on local action controller of claim 1. However, while Siminoff does disclose combining the event detections provided by the one or more always-on data analytic neural network subsystems with the A/V device as detailed above, Siminoff does not explicitly disclose a structure wherein the decision combiner is implemented as a logic circuit accepting as input each of the event detections provided by the one or more always-on data analytic neural network subsystems and generates an output of the action signal. Relevant to the problem at hand of combining the inputs from different sensors, Zhang teaches a system and method for multi-sensor fusion (Abstract) using a logic circuit accepting as input and combining the data from multiple sensors to output a decision (Fig. 2: data fusion circuitry 246 accepts as input data from each of sensor units 231; [0003], lines 3-9: multi-sensor fusion takes data in from a plurality of sensors and uses a fusion core to combine the data and output a decision). It would have been obvious to one of ordinary skill in the art at the time of effective filing to combine the logic circuit of Zhang with the always-on local action controller of Siminoff in order to provide a structure capable of combining the needed data to provide precise sensor decisions ([0003], lines 6-13).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Siminoff et al. (hereinafter Siminoff, U.S. Patent 11,217,076) in view of Murthy et al. (hereinafter Murthy, U.S. Patent Application Publication 2023/0016037).

Regarding Claim 11, in addition to the elements stated above regarding claim 6, Siminoff fails to explicitly disclose: wherein the decision combiner is implemented as a neural network. In the same field of machine learning, Murthy discloses wherein the decision combiner is implemented as a neural network ([0028], lines 11-15: encoded data is combined, i.e. aggregated, and analyzed by machine learning to determine occupancy). It would have been obvious to one of ordinary skill in the art at the time of effective filing to combine the decision combiner implemented as a neural network of Murthy with the system of Siminoff, as processing the encoded data together, as opposed to raw data directly from the sensors, improves processing time and makes the system more energy efficient, as taught by Murthy ([0029], lines 15-17).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew C Flanders, whose telephone number is (571) 272-7516. The examiner can normally be reached M-F 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW C FLANDERS/
Supervisory Patent Examiner, Art Unit 2655
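Editor's note on the technology at issue: the claim 10 and claim 11 rejections above turn on two alternative structures for the claimed "decision combiner" (a fixed logic circuit versus a learned neural combiner). A minimal sketch of that distinction, in Python, is below. Every name, sensor, weight, and threshold here is invented purely for illustration; nothing is taken from the application, Siminoff, Zhang, or Murthy.

```python
# Hypothetical sketch of two "decision combiner" structures: fixed
# combinational logic (claim-10-style) vs. a single learned unit
# (claim-11-style). All identifiers and values are invented examples.
from typing import Dict


def logic_combiner(detections: Dict[str, bool]) -> bool:
    """Claim-10-style combiner: fixed Boolean logic over event detections.

    In this sketch, the action signal fires when the wake-word detector
    AND at least one corroborating sensor report an event.
    """
    return detections["wake_word"] and (
        detections["motion"] or detections["glass_break"]
    )


def learned_combiner(
    scores: Dict[str, float],
    weights: Dict[str, float],
    bias: float = -0.5,
) -> bool:
    """Claim-11-style combiner: a one-unit 'neural network' that weighs
    per-detector confidence scores instead of hard Boolean detections."""
    activation = sum(weights[name] * scores[name] for name in scores) + bias
    return activation > 0.0


# Example: wake word detected together with a motion event.
events = {"wake_word": True, "motion": True, "glass_break": False}
print(logic_combiner(events))  # True -> emit the action signal

scores = {"wake_word": 0.9, "motion": 0.7, "glass_break": 0.1}
weights = {"wake_word": 1.0, "motion": 0.5, "glass_break": 0.5}
print(learned_combiner(scores, weights))  # 0.9 + 0.35 + 0.05 - 0.5 = 0.8 > 0 -> True
```

The contrast mirrors the examiner's mapping: the first function is pure combinational logic (what Zhang's fusion circuitry is cited for), while the second replaces the fixed rule with trainable weights (what Murthy's machine-learning aggregation is cited for).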

Prosecution Timeline

Mar 23, 2022: Application Filed
Oct 21, 2024: Non-Final Rejection — §102, §103
Apr 24, 2025: Response Filed
Jul 26, 2025: Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12562160 — ARBITRATION BETWEEN AUTOMATED ASSISTANT DEVICES BASED ON INTERACTION CUES
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12547835 — AUTOMATIC EXTRACTION OF SEMANTICALLY SIMILAR QUESTION TOPICS
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12512089 — TESTING CASCADED DEEP LEARNING PIPELINES COMPRISING A SPEECH-TO-TEXT MODEL AND A TEXT INTENT CLASSIFIER
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12394416 — DETECTING NEAR MATCHES TO A HOTWORD OR PHRASE
Granted Aug 19, 2025 (2y 5m to grant)

Patent 11328007 — GENERATING A DOMAIN-SPECIFIC PHRASAL DICTIONARY
Granted May 10, 2022 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 88% (+14.0%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 775 resolved cases by this examiner. Grant probability derived from career allow rate.
