Prosecution Insights
Last updated: April 19, 2026
Application No. 17/675,947

ALWAYS-ON WAKE ON MULTI-DIMENSIONAL PATTERN DETECTION (WOMPD) FROM A SENSOR FUSION CIRCUITRY

Final Rejection §103

Filed: Feb 18, 2022
Examiner: FLANDERS, ANDREW C
Art Unit: 2655
Tech Center: 2600 (Communications)
Assignee: Aondevices Inc.
OA Round: 4 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 3m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 74% (574 granted / 775 resolved; +12.1% vs Tech Center average, above average)
Interview Lift: +14.0% (moderate lift, comparing resolved cases with an interview against those without)
Typical Timeline: 3y 3m average prosecution; 9 applications currently pending
Career History: 784 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      10.3%    -29.7%
§103      38.7%    -1.3%
§102      31.6%    -8.4%
§112       8.3%    -31.7%

Tech Center averages are estimates. Based on career data from 775 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to the claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-7 and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Black et al. (hereinafter Black, U.S. Patent Application Publication 2016/0091955) in view of Yonetani et al. (hereinafter Yonetani, U.S. Patent Application Publication 2022/0358749).

Regarding Claim 1, Black discloses: A device wake-up system to fully activate the device from a sleep mode (e.g. method/medium/apparatus for entering a low-power sleep mode and awaking; [0004]-[0007]), comprising: a plurality of sensors each receptive to an external input, the respective external inputs being translatable to corresponding signals (e.g. one or more sensors 250, note corresponding communication as well; Fig. 2 element 250 and para. [0019]).
While Black provides detail regarding collecting and processing sensor data (see [0023] in the context of the sensor fusion engine), Black is not explicit in terms of disclosing: a plurality of feature extractors each connected to a respective one of the plurality of sensors and receptive to the signals outputted therefrom, feature data associated with the signals being generated by each of the corresponding ones of the plurality of feature extractors.

In a related field of endeavor (e.g. sensor signal processing), Yonetani discloses performing feature extraction on data obtained from the sensor [0248]. Modifying Black's system to use the sensor features disclosed by Yonetani further makes obvious: a plurality of feature extractors each connected to a respective one of the plurality of sensors and receptive to the signals outputted therefrom (e.g. portion of controller 21 performing information processing [0136], note inference program 81 causes the apparatus to perform information processing; inference program loaded into RAM and executed [0154]; further note that inference program 81 comprises a number of programs including an inspection program 81A [0393], a prediction program 81B [0439], a conversation program 81C [0482], and a control program 81D [0525]), feature data associated with the signals being generated by each of the corresponding ones of the plurality of feature extractors (e.g. note example of information processing or "feature extraction" in [0248], specifically the controller using the camera to perform image processing on the obtained image data to estimate gender; [0248]; and see [0260] detailing another form of information processing, in this instance performing speech analysis on the obtained sound data. In essence, the information processing performed by the controller uses a number of programs loaded into RAM which are then used to process information provided by sensor signals, or "a plurality of feature extractors").

It would have been obvious to one of ordinary skill in the art at the time of filing to apply the techniques and components taught by Yonetani to the system of Black. It is likely that Black performs these operations in step 340 given [0026]'s disclosure of "information derived from sensor data," and in a similar field of multi-sensor fusion/inference, Yonetani expands further on deriving information from sensor data, or information processing. Thus, the combination would have naturally followed and been predictable. Furthermore, application of Yonetani's teachings would have been desirable to one of ordinary skill in the art as the application would further improve Black by improving the accuracy of the inference result generated from combining ("sensor fusion"); see Yonetani [0050] and the specification as a whole.

The combination of Black in view of Yonetani further makes obvious: a plurality of inference circuits connected to a respective one of the plurality of feature extractors (e.g. inputting the data into multiple inference models to perform predetermined inference; S203; and paras [0277]-[0278] of Yonetani), each of the inference circuits generating separate inference decisions from sensor-specific feature patterns of the feature data of single ones of the plurality of sensors generated by each of the corresponding ones of the plurality of feature extractors (e.g. generating inference result for the target data in the target environment; [0279] of Yonetani; note also the example image recognition and speech analysis discussed above and in Yonetani paras. [0248] and [0260]); and a decision combiner connected to each of the plurality of inference circuits (e.g. combining inference result from each inference model under combine rule; S204 of Yonetani; as modifying sensor fusion engine of Black), a wake signal being generated to fully activate the device placed into the sleep mode with minimal power consumption based upon an aggregated fusion of the separate inference decisions provided by the plurality of inference circuits determining whether aggregate patterns of the inference decisions evaluated across a plurality of sensor-specific feature patterns correspond to one of multiple wake-up conditions (e.g. sensor fusion engine awakes general purpose processor when all the sensor data processing is completed and processing is required for further tasks; Black at [0026]; now in view of inference result combining of Yonetani, namely S204), the decision combiner being implemented entirely within the device and operating while the device is in the sleep mode (e.g. Black's sensor fusion engine may be part of always-on subsystem 235; [0019]); wherein the inference decisions and a decision to generate the wake signal are made while in the sleep mode, the wake signal transitioning the device from the sleep mode to an active mode (e.g. sensor fusion engine awakes general purpose processor when all the sensor data processing is completed and processing is required for further tasks; Black at [0026]; now in view of inference result combining of Yonetani, namely S204).

Regarding Claim 2, in addition to the elements stated above regarding claim 1, the combination further makes obvious: wherein the wake signal is output to an application processor (e.g. sensor fusion engine awakes general purpose processor when all the sensor data processing is completed and processing is required for further tasks; Black at [0026]; now in view of inference result combining of Yonetani, namely S204).

Regarding Claim 3, in addition to the elements stated above regarding claim 1, the combination further makes obvious: wherein one of the plurality of sensors is a microphone and the external input is an audio wave (e.g. controller 21 may use a microphone as the sensor for obtaining the target data 225 and may obtain sound data including the speech of the user. The controller 21 may perform speech analysis on the obtained sound data; Yonetani para [0260]).

Regarding Claim 4, in addition to the elements stated above regarding claim 1, the combination further makes obvious: wherein one of the plurality of one or more sensors is an image sensor and the external input is light photons corresponding to an image (e.g. controller 21 may use a camera as the sensor for obtaining the learning-environment data 35 and may obtain image data including the face of the user. The controller 21 may then perform image processing on the obtained image data and estimate the gender from the face; Yonetani para [0248]).

Regarding Claim 5, in addition to the elements stated above regarding claim 1, the combination further makes obvious: wherein one of the plurality of sensors is a motion sensor, and the external input is physical motion applied thereto (e.g. see example use of a force sensor and/or a Lidar sensor in Yonetani's [0527]-[0531]; consider in the alternative Yonetani's camera system as well).
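Before the remaining dependent claims, a minimal sketch may help visualize the claim 1 architecture as the rejection maps it onto Black and Yonetani: each sensor feeds its own feature extractor and inference circuit, and a decision combiner checks the aggregate pattern of per-sensor decisions against multiple wake-up conditions while the device sleeps. Every name below (SensorChannel, sleep_mode_step, the wake conditions) is a hypothetical illustration and appears in neither reference nor the claims.

```python
# Hypothetical sketch of the claim 1 wake-on-pattern architecture; all names
# are illustrative and do not appear in Black, Yonetani, or the application.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class SensorChannel:
    """One sensor with its dedicated feature extractor and inference circuit."""
    read: Callable[[], list]         # raw signal translated from the external input
    extract: Callable[[list], list]  # per-sensor feature extractor
    infer: Callable[[list], bool]    # sensor-specific inference decision

def sleep_mode_step(channels: Sequence[SensorChannel],
                    wake_conditions: Sequence[tuple]) -> bool:
    """One pass of the always-on subsystem while the device sleeps: gather the
    separate per-sensor inference decisions, then let the decision combiner
    fire the wake signal only if their aggregate pattern matches one of
    multiple wake-up conditions."""
    decisions = tuple(ch.infer(ch.extract(ch.read())) for ch in channels)
    return decisions in set(wake_conditions)

# e.g. in a two-channel (microphone, motion) configuration, wake when both
# channels fire or when the microphone alone does:
WAKE_CONDITIONS = [(True, True), (True, False)]
```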
Regarding Claim 6, in addition to the elements stated above regarding claim 1, the combination further makes obvious: wherein the inference circuits are implemented as a multi-class classifier neural network (e.g. machine learning model uses various neural networks and/or a combination of them; see Yonetani para [0177]).

Regarding Claim 7, in addition to the elements stated above regarding claim 6, the combination further makes obvious: wherein the multi-class classifier neural network is selected from a group consisting of: a convolutional neural network (CNN) (e.g. a convolutional neural network; Yonetani para [0297]), a long short term memory network (LSTM) (e.g. long short-term memory may be used for recursion layers; Yonetani para [0179]), a recurrent neural network (RNN) (e.g. a recurrent neural network; Yonetani para [0297]), and a multilayer perceptron (MLP) (e.g. a four-layered fully coupled neural network; Yonetani para [0179]).

Regarding Claim 9, in addition to the elements stated above regarding claim 1, the combination further makes obvious: wherein the decision combiner is implemented as a logic circuit accepting as input each of the inference decisions provided by the plurality of inference circuits, and generates an output of the wake signal (e.g. combining inference result from each inference model under combine rule; S204 of Yonetani; note use of combining rule 5 ["logic"] for step S204, see Yonetani paras [0279]-[0285]; as modifying sensor fusion engine of Black).

Regarding Claim 10, in addition to the elements stated above regarding claim 1, the combination further makes obvious: wherein the decision combiner is implemented as a neural network (e.g. machine learning models each may include a neural network; [0196] of Yonetani; and controller 11 thus combines the inference result from each trained machine learning model 45 together under the combining rule 5 through the processing described below; Yonetani para [0280]; controller combines the weighted inference result from each trained machine learning model 45 together; Yonetani para [0284]).

Regarding Claim 11, in addition to the elements stated above regarding claim 1, the combination further makes obvious: wherein the inference circuits are implemented with a machine learning system (e.g. machine learning models each may include a neural network; [0196] of Yonetani; and controller 11 thus combines the inference result from each trained machine learning model 45 together under the combining rule 5 through the processing described below; Yonetani para [0280]; controller combines the weighted inference result from each trained machine learning model 45 together; Yonetani para [0284]).

Regarding Claims 12-16, these method claims correspond to system claim 1 (claim 12), claim 3 (claim 13), claim 4 (claim 14), claim 5 (claim 15), and claims 1 and 6 (claim 16), and are rejected under the same grounds stated above.
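Claims 9 and 10 above map the decision combiner onto Yonetani's combining rule in two flavors: a pure logic combination of binary decisions, and a weighted combination of model scores (Yonetani [0280], [0284]). The sketch below illustrates both under assumed weights and a threshold; none of the names or numbers come from Yonetani's actual combining rule 5.

```python
# Illustrative decision combiners in the spirit of the claim 9/10 mappings.
# Weights, threshold, and function names are assumptions for this sketch.
from typing import Sequence

def logic_combiner(decisions: Sequence[bool]) -> bool:
    """Claim 9 style: a logic circuit over binary inference decisions,
    here a simple majority vote producing the wake signal."""
    return sum(decisions) > len(decisions) / 2

def weighted_combiner(scores: Sequence[float], weights: Sequence[float],
                      threshold: float = 0.5) -> bool:
    """Claim 10 style: weight each model's inference score, merge the
    weighted results, and threshold the aggregate into one decision."""
    if len(scores) != len(weights):
        raise ValueError("one weight per inference circuit")
    aggregate = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return aggregate >= threshold

# e.g. keyword spotter 0.9, motion model 0.4, camera 0.2, audio weighted highest
print(logic_combiner([True, True, False]))                          # True
print(weighted_combiner([0.9, 0.4, 0.2], weights=[0.6, 0.3, 0.1]))  # True (0.68)
```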
Regarding Claims 17-19, these method claims correspond to system claim 7 (claim 17), system claims 9 and 11 (claim 18), and system claims 1, 9, and 11 (claim 19), and are rejected under the same grounds stated above.

Regarding Claim 20, in addition to the elements stated above regarding claim 1, the combination further makes obvious: An article of manufacture comprising a non-transitory program storage medium readable by a computing device, the medium tangibly embodying one or more programs of instructions executable by the computing device to perform a method for fully waking the computing device from a sleep mode (e.g. computer-readable medium including code that when executed by a processor... places in sleep mode... awakes from sleep mode; [0006] of Black; see also storage medium for implementing the components of the inference apparatus; [0060] of Yonetani), the method comprising the steps of (see rejection of claim 1 above for remaining claimed limitations).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Black et al. (hereinafter Black, U.S. Patent Application Publication 2016/0091955) in view of Yonetani et al. (hereinafter Yonetani, U.S. Patent Application Publication 2022/0358749), in further view of Moore, Samuel K., "Eta Compute Debuts Spiking Neural Network Chip for Edge AI," IEEE Spectrum, 16 Oct. 2018, spectrum.ieee.org/eta-compute-debuts-spiking-neural-network-chip-for-edge-ai.

Regarding Claim 8, in addition to the elements stated above regarding claim 6, the combination fails to explicitly disclose: wherein the inference circuits consume less than 100 microwatts of power while in operation. In the same field of machine learning, Moore teaches: wherein the inference circuits consume less than 100 microwatts of power while in operation (Page 3, para. 1: low-power neural network chip. Page 7, para. 1: burns 50 microwatts in listening mode. See also the sub-title: "Chip can learn on its own and inference at 100-microwatt scale"). It would have been obvious to one of ordinary skill in the art at the time of effective filing to combine the low-powered neural network of Moore with the combination of Black and Yonetani in order to further increase the energy efficiency of the system.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew C Flanders, whose telephone number is (571) 272-7516. The examiner can normally be reached M-F 8:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW C FLANDERS/
Supervisory Patent Examiner, Art Unit 2655

Prosecution Timeline

Feb 18, 2022: Application Filed
Mar 01, 2024: Non-Final Rejection (§103)
Jul 05, 2024: Response Filed
Sep 10, 2024: Final Rejection (§103)
Mar 12, 2025: Request for Continued Examination
Mar 14, 2025: Response after Non-Final Action
Apr 04, 2025: Non-Final Rejection (§103)
Oct 07, 2025: Response Filed
Feb 02, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12562160: ARBITRATION BETWEEN AUTOMATED ASSISTANT DEVICES BASED ON INTERACTION CUES (granted Feb 24, 2026; 2y 5m to grant)
Patent 12547835: AUTOMATIC EXTRACTION OF SEMANTICALLY SIMILAR QUESTION TOPICS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12512089: TESTING CASCADED DEEP LEARNING PIPELINES COMPRISING A SPEECH-TO-TEXT MODEL AND A TEXT INTENT CLASSIFIER (granted Dec 30, 2025; 2y 5m to grant)
Patent 12394416: DETECTING NEAR MATCHES TO A HOTWORD OR PHRASE (granted Aug 19, 2025; 2y 5m to grant)
Patent 11328007: GENERATING A DOMAIN-SPECIFIC PHRASAL DICTIONARY (granted May 10, 2022; 2y 5m to grant)

Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 74%
With Interview: 88% (+14.0%)
Median Time to Grant: 3y 3m
PTA Risk: High

Based on 775 resolved cases by this examiner. Grant probability is derived from the career allow rate.
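As a sanity check on how these headline figures relate (assuming, as the dashboard presents it, that the interview lift is simple percentage-point addition to the base allow rate):

```python
# Reproducing the projection figures from the examiner's stated career counts.
granted, resolved = 574, 775
allow_rate = granted / resolved               # 0.7406... shown as 74%
interview_lift = 0.14                         # +14.0 percentage points
with_interview = allow_rate + interview_lift  # 0.8806... shown as 88%
print(f"base {allow_rate:.1%}, with interview {with_interview:.1%}")
```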

Free tier: 3 strategy analyses per month