Prosecution Insights
Last updated: April 19, 2026
Application No. 18/148,358

END-TO-END NEUROMORPHIC ACOUSTIC PROCESSING

Non-Final OA (§102, §103)
Filed: Dec 29, 2022
Examiner: BECKER, TYLER JUSTIN
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (14 granted / 19 resolved; +11.7% vs TC avg, above average)
Interview Lift: +19.0% (strong) across resolved cases with vs. without an interview
Typical Timeline: 2y 10m average prosecution; 22 applications currently pending
Career History: 41 total applications across all art units

Statute-Specific Performance

§101: 23.1% (-16.9% vs TC avg)
§103: 45.4% (+5.4% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)
TC average figures are estimates • Based on career data from 19 resolved cases
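The statute figures above are internally consistent: subtracting each statute's delta from its overcome rate recovers the same Tech Center baseline, and the career allow rate follows from the raw counts. A minimal sketch of that arithmetic check, using only the figures shown in this panel (variable names are illustrative, not from any analytics API):

```python
# Consistency check on the panel's figures (numbers copied from the panel;
# nothing here is computed from USPTO data directly).
granted, resolved = 14, 19
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 73.7%, shown rounded as 74%

# Each statute row shows an overcome rate and a delta vs the Tech Center
# average estimate; subtracting the delta recovers the implied TC baseline.
overcome = {"101": 23.1, "103": 45.4, "102": 14.9, "112": 16.7}
delta_vs_tc = {"101": -16.9, "103": 5.4, "102": -25.1, "112": -23.3}
implied_tc = {s: overcome[s] - delta_vs_tc[s] for s in overcome}
for s, tc in implied_tc.items():
    print(f"§{s}: implied TC average {tc:.1f}%")  # every row implies ~40.0%
```

That every row implies the same ~40% baseline suggests the deltas were all computed against one Tech Center estimate rather than per-statute averages.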

Office Action

§102 §103
DETAILED ACTION

This action is in response to the application filed on December 29, 2022. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because of the following informalities:
Line 17 of [0017] of the specification reads “the device will response”, but should read “the device will respond”.
Line 14 of [0021] of the specification reads “unless a s spike is emitted”, but should read “unless a spike is emitted”.
Line 11 of [0026] of the specification reads “which depends on the microphone 220 and as its data as core inputs”, but should read “which depends on the microphone 220 and uses its data as core inputs”.
Line 19 of [0055] of the specification reads “is appropriate configured”, but should read “is appropriately configured”.
Line 20 of [0064] of the specification reads “[INVENTORS: Correct?]”. This appears to have been included erroneously, and should be removed if so.
Appropriate correction is required.

Claim Objections

Claim 11 is objected to because of the following informalities: Claim 11 reads "each neuromorphic . Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "spike generator" in claims 1-3, 16, and 17. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. 
Specifically, the term “spike generator” is being interpreted as corresponding to the description in [0057]-[0060] of the specification. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 9, 10, 12, 13, 16, and 20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Khellah et al. (US Pat. Pub. No. 2019/0115011 A1 hereinafter Khellah). Regarding claim 1, Khellah discloses an apparatus comprising: a spike generator comprising hardware to generate a set of input spikes based on acoustic signal data generated by a microphone of a computing device (Khellah, Fig. 5, 504; [0057]: "At block 504, the processor transduces the audio into a plurality of spikes."; [0020]: "the system 100 may receive audio input from an audio source 102 and output a single output spike 112 corresponding to a detected keyword in the received audio input. For example, the audio source 102 may be a microphone."); a neuromorphic compute block to: implement a spiking neural network (SNN) (Khellah, [0030]: "In some examples, the spiking neural network 300 may be a sparsely active network (SAN). For example, a SAN may be a deep spiking neural network formed by an input layer 302, one or many hidden layers 304, and an output layer 306. The network architecture may include layers of spiking neurons 216, with each neuron 216 operating independently."; [0028]: "the spiking neural network 300 can be the spiking neural network 110 of the system of FIG. 1, the spiking neural network 636 of the computing device 600 of FIG. 6 below, or the spiking neural network module 712 of the computer readable media 700 of FIG. 7 below."; [0055]: "The example method is generally referred to by the reference number 500 and can be implemented in the system 100 of FIG. 1 above, the processor 702 of the computing device 700 of FIG. 7 below, or the computer readable media 800 of FIG. 8 below."); receive the set of input spikes as an input to the SNN (Khellah, Fig. 
5, 506; [0058]: "At block 506, the processor sends one or more of the spikes to a spiking neural network."); generate a set of output spikes from the SNN based on the input; threshold logic to: determine that the set of output spikes correspond to a result of an acoustic recognition task; and generate result data to identify the result (Khellah, Fig. 5, 508; [0059]: "At block 508, the processor receives a spike corresponding to a detected keyword from the spiking neural network. For example, the single spike received from the output of the spiking neural network may correspond to a keyword or a key-phrase."; [0035]: "the example spiking neural network 300 can be implemented using fewer or additional components not illustrated in FIG. 3 (e.g., additional inputs, layers, spikes, outputs, etc.). For example, although one output node is shown for each key-phrase in the example of FIG. 3, in some examples, multiple output nodes may exist for each key phrase."). Regarding claim 9, the rejection of claim 1 is incorporated. Khellah discloses all of the elements of the current invention as stated above. Khellah further discloses wherein the acoustic recognition task comprises one of a wake-on-voice task, a keyword spotting task, an acoustic context awareness task, an acoustic event detection task, an instant speech detection tasks, or a dynamic noise suppression task (Khellah, [0016]: "The present disclosure relates generally to techniques for detecting keywords in audio using a spiking neural network."). Regarding claim 10, the rejection of claim 1 is incorporated. Khellah discloses all of the elements of the current invention as stated above. Khellah further discloses wherein the computing device comprises one of a laptop computing device, a smartphone device, a home monitor device, or a personal digital assistant device (Khellah, [0066]: "a block diagram is shown illustrating an example computing device that can detect keywords using a spiking neural network. 
The computing device 700 may be, for example, a laptop computer, desktop computer, tablet computer, mobile device, or wearable device, among others."). Regarding claim 12, Khellah discloses a method comprising: receiving a digital audio signal generated by a microphone (Khellah, [0020]: "the system 100 may receive audio input from an audio source 102 and output a single output spike 112 corresponding to a detected keyword in the received audio input. For example, the audio source 102 may be a microphone."); converting, using computing hardware, the digital audio signal into a train of input spikes (Khellah, Fig. 5, 504; [0057]: "At block 504, the processor transduces the audio into a plurality of spikes."; [0028]: "the spiking neural network 300 can be the spiking neural network 110 of the system of FIG. 1, the spiking neural network 636 of the computing device 600 of FIG. 6 below, or the spiking neural network module 712 of the computer readable media 700 of FIG. 7 below."; [0055]: "The example method is generally referred to by the reference number 500 and can be implemented in the system 100 of FIG. 1 above, the processor 702 of the computing device 700 of FIG. 7 below, or the computer readable media 800 of FIG. 8 below."); sending the train of input spikes to a spiking neural network (SNN) implemented in a neuromorphic computing device (Khellah, Fig. 5, 506; [0058]: "At block 506, the processor sends one or more of the spikes to a spiking neural network."); generating a set of output spikes as an output of the SNN based on the train of input spikes (Khellah, Fig. 5, 508; [0059]: "At block 508, the processor receives a spike corresponding to a detected keyword from the spiking neural network. For example, the single spike received from the output of the spiking neural network may correspond to a keyword or a key-phrase."; [0035]: "the example spiking neural network 300 can be implemented using fewer or additional components not illustrated in FIG. 
3 (e.g., additional inputs, layers, spikes, outputs, etc.). For example, although one output node is shown for each key-phrase in the example of FIG. 3, in some examples, multiple output nodes may exist for each key phrase."); summing the set of output spikes to determine that a particular threshold is met; and generating a result of an acoustic recognition task based on meeting the particular threshold (Khellah, [0022]: "a neuron may be activated when a membrane potential exceeds a threshold value and send out a spike. As used herein, a membrane potential refers to a score associated with each neuron that can be modified by the spikes produced by other neurons. In some examples, when a neuron of the neural network fires, the neuron may generate information that travels to other neurons. The signal sent to other neurons, in turn, may increase or decrease the membrane potentials of the other neurons when the weight is positive or negative, respectively. As described in greater detail with respect to FIG. 2 below, the spikes from the spike transducer 108 can be input into an input layer of the spiking neural network 110. The spiking neural network 110 may be trained to output a single output spike 112. For example, the output spike 112 may correspond to a detected keyword. In some examples, a system can then use the corresponding detected keyword in any suitable application."). Regarding claim 13, the rejection of claim 12 is incorporated. Khellah discloses all of the elements of the current invention as stated above. Khellah further discloses offloading the acoustic recognition task from another processing device while the other processing device is in a low power mode (Khellah, [0017]: "The techniques described herein thus enable a low-power solution for keyword spotting using a sparsely active network based on spikes. 
The techniques described herein can be used to improve the ability of an always-on keyword spotting system with the ability to recognize more keywords. In particular, the spiking neural network may be a type of sparsely active neural (SAN) network that only processes data as needed. Such an event-driven approach may keep the system inactive unless there is a speech stimulus, thus reducing the power consumption during inactivity."). Regarding claim 16, Khellah discloses a system comprising: a processor; a memory (Khellah, [0066]: "The computing device 700 may include a central processing unit (CPU) 702 that is configured to execute stored instructions, as well as a memory device 704 that stores instructions that are executable by the CPU 702."); a microphone to generate digital acoustic data (Khellah, [0020]: "the system 100 may receive audio input from an audio source 102 and output a single output spike 112 corresponding to a detected keyword in the received audio input. For example, the audio source 102 may be a microphone."); a neuromorphic processing block comprising: a spike generator comprising circuitry to: receive the digital acoustic data; and generate a set of input spikes based on the digital acoustic data (Khellah, Fig. 5, 504; [0057]: "At block 504, the processor transduces the audio into a plurality of spikes."); a neuromorphic compute block coupled to: receive the set of input spikes from the spike generator (Khellah, Fig. 5, 506; [0058]: "At block 506, the processor sends one or more of the spikes to a spiking neural network."); provide the set of input spikes to a spiking neural network implemented in a network of neuromorphic cores of the neuromorphic compute block; generate output spikes based on the set of input spikes; threshold detection circuitry to determine, from the output spikes, that the output spikes indicate a particular result for an acoustic recognition task (Khellah, Fig. 
5, 508; [0059]: "At block 508, the processor receives a spike corresponding to a detected keyword from the spiking neural network. For example, the single spike received from the output of the spiking neural network may correspond to a keyword or a key-phrase."; [0035]: "the example spiking neural network 300 can be implemented using fewer or additional components not illustrated in FIG. 3 (e.g., additional inputs, layers, spikes, outputs, etc.). For example, although one output node is shown for each key-phrase in the example of FIG. 3, in some examples, multiple output nodes may exist for each key phrase.").

Regarding claim 20, the rejection of claim 16 is incorporated. Khellah discloses all of the elements of the current invention as stated above. Khellah further discloses a personal computing device to comprise the processor, memory, microphone, and neuromorphic processing block (Khellah, [0066]: "a block diagram is shown illustrating an example computing device that can detect keywords using a spiking neural network. The computing device 700 may be, for example, a laptop computer, desktop computer, tablet computer, mobile device, or wearable device, among others. In some examples, the computing device 700 may be a smart camera or a digital security surveillance camera. The computing device 700 may include a central processing unit (CPU) 702 that is configured to execute stored instructions, as well as a memory device 704 that stores instructions that are executable by the CPU 702. The CPU 702 may be coupled to the memory device 704 by a bus 706. Additionally, the CPU 702 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. Furthermore, the computing device 700 may include more than one CPU 702. In some examples, the CPU 702 may be a system-on-chip (SoC) with a multi-core processor architecture.
In some examples, the CPU 702 can be a specialized digital signal processor (DSP) used for image processing. The memory device 704 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems. For example, the memory device 704 may include dynamic random access memory (DRAM).").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2, 14, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khellah as applied to claims 1, 9, 10, 12, 13, 16, and 20 above, and further in view of Kim et al. (US Pat. Pub. No. 2016/0216751 A1 hereinafter Kim).

Regarding claim 2, the rejection of claim 1 is incorporated. Khellah discloses all of the elements of the current invention as stated above. Khellah further discloses provide the acoustic signal data to the spike generator (Khellah, Fig.
5, 506; [0058]: "At block 506, the processor sends one or more of the spikes to a spiking neural network."). However, Khellah fails to expressly recite direct memory access (DMA) circuitry to: retrieve the acoustic signal data from memory of the computing device; and copy the result data to the memory. Kim teaches direct memory access (DMA) circuitry to: retrieve the acoustic signal data from memory of the computing device; and copy the result data to the memory (Kim, [0047]: "The main DMA controller 201 may directly access audio data stored in the storage unit 315, may communicate the audio data to the CPU 100 via the system bus 600, and may thereafter communicate decoded audio data from the CPU 100 to the storage unit 315."). Khellah and Kim are analogous arts because they each belong to the same field of audio processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah to incorporate the teachings of Kim to use direct memory access circuitry to retrieve and store data in the memory. A direct memory access circuit acts as a dedicated system that can access the device’s storage directly (Kim, [0047]). This allows the storage to be accessed even if a portion of the device is in a low power mode. Regarding claim 14, the rejection of claim 13 is incorporated. Khellah discloses all of the elements of the current invention as stated above. Khellah further discloses receiving a programming input to configure: the SNN to perform an inference related to the acoustic recognition task (Khellah, [0030]: "In some examples, the spiking neural network 300 may be a sparsely active network (SAN). For example, a SAN may be a deep spiking neural network formed by an input layer 302, one or many hidden layers 304, and an output layer 306. 
The network architecture may include layers of spiking neurons 216, with each neuron 216 operating independently."; [0032]: "In some examples, the execution of the SAN may include a transduction and an inference."). However, Khellah fails to expressly recite a first direct memory access (DMA) controller to copy the digital audio signal to memory while the other processing device is in the low power mode; and a second direct memory access (DMA) controller to retrieve the digital audio signal from the memory for the computing hardware while the other processing device is in the low power mode. Kim teaches a first direct memory access (DMA) controller to copy the digital audio signal to memory while the other processing device is in the low power mode (Kim, [0047]: "The main DMA controller 201 may directly access audio data stored in the storage unit 315, may communicate the audio data to the CPU 100 via the system bus 600, and may thereafter communicate decoded audio data from the CPU 100 to the storage unit 315."); and a second direct memory access (DMA) controller to retrieve the digital audio signal from the memory for the computing hardware while the other processing device is in the low power mode (Kim, [0059]: "The second DMA unit 450 may directly access the system memory unit 305 and communicate audio data stored in the stream buffer 330 to the audio buffer 460 under the control of the CPU 100 and/or control logic 430."). Khellah and Kim are analogous arts because they each belong to the same field of audio processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah to incorporate the teachings of Kim to use direct memory access circuitry to retrieve and store data in the memory. A direct memory access circuit acts as a dedicated system that can access the device’s storage directly (Kim, [0047]). 
This allows the storage to be accessed even if a portion of the device is in a low power mode.

Regarding claim 17, the rejection of claim 16 is incorporated. Khellah discloses all of the elements of the current invention as stated above. Khellah further discloses provide the digital acoustic data to the spike generator (Khellah, Fig. 5, 506; [0058]: "At block 506, the processor sends one or more of the spikes to a spiking neural network."). However, Khellah fails to expressly recite a first direct memory access (DMA) controller external to the neuromorphic processing block to copy the digital acoustic data to the memory; a DMA controller in the neuromorphic processing block to: access the digital acoustic data from the memory; and write the particular result to the memory. Kim teaches a first direct memory access (DMA) controller external to the neuromorphic processing block to copy the digital acoustic data to the memory (Kim, [0047]: "The main DMA controller 201 may directly access audio data stored in the storage unit 315, may communicate the audio data to the CPU 100 via the system bus 600, and may thereafter communicate decoded audio data from the CPU 100 to the storage unit 315."); a DMA controller in the neuromorphic processing block to: access the digital acoustic data from the memory; and write the particular result to the memory (Kim, [0059]: "The second DMA unit 450 may directly access the system memory unit 305 and communicate audio data stored in the stream buffer 330 to the audio buffer 460 under the control of the CPU 100 and/or control logic 430."). Khellah and Kim are analogous arts because they each belong to the same field of audio processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah to incorporate the teachings of Kim to use direct memory access circuitry to retrieve and store data in the memory.
A direct memory access circuit acts as a dedicated system that can access the device’s storage directly (Kim, [0047]). This allows the storage to be accessed even if a portion of the device is in a low power mode.

Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khellah, in view of Kim, as applied to claims 2, 14, and 17 above, and further in view of Krishnamurthy et al. (US Pat. Pub. No. 2019/0042910 A1 hereinafter Krishnamurthy).

Regarding claim 3, the rejection of claim 2 is incorporated. Khellah, in view of Kim, discloses all of the elements of the current invention as stated above. However, Khellah, in view of Kim, fails to expressly recite an interconnect fabric to enable point-to-point communication between the DMA circuitry, the spike generator, and the neuromorphic compute block. Krishnamurthy teaches an interconnect fabric to enable point-to-point communication between the DMA circuitry, the spike generator, and the neuromorphic compute block (Krishnamurthy, [0032]: "Neuromorphic hardware implements SNNs as multi-core neuro-processors (e.g., neuro-synaptic cores, neural-cores, neural-core structures, etc.). Neuro-cores often implement several neurons that are colocated with synapse memory blocks to hold synapse weights. The colocation of the synapse memory on the core is used to overcome data-memory bandwidth bottlenecks.
Generally, neural-cores are tiled and connected with a Network on Chip (NoC) or other interconnect fabric."; [0130]: "an interconnect unit(s) 2202 is coupled to: an application processor 2210 which includes a set of one or more cores 502A-N and shared cache unit(s) 1806; a system agent unit 1810; a bus controller unit(s) 1816; an integrated memory controller unit(s) 1814; a set or one or more coprocessors 2220 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; an static random access memory (SRAM) unit 2230; a direct memory access (DMA) unit 2232; and a display unit 2240 for coupling to one or more external displays."). Khellah, Kim, and Krishnamurthy are analogous arts because they each belong to the same field of signal processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah, as modified by the audio processing system of Kim, to incorporate the teachings of Krishnamurthy to use an interconnect fabric to enable communication between different portions of the device. Using an interconnect fabric allows for many neuro-cores to be connected with each other and with other components (Krishnamurthy, [0032]). This ensures that the device components can communicate effectively and efficiently.

Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khellah, in view of Kim, as applied to claims 2, 14, and 17 above, and further in view of Park et al. (US Pat. Pub. No. 2016/0135047 A1 hereinafter Park).

Regarding claim 15, the rejection of claim 14 is incorporated. Khellah, in view of Kim, discloses all of the elements of the current invention as stated above.
Khellah further discloses waking the other processing device from the low power state based on the result (Khellah, [0017]: "The techniques described herein thus enable a low-power solution for keyword spotting using a sparsely active network based on spikes. The techniques described herein can be used to improve the ability of an always-on keyword spotting system with the ability to recognize more keywords. In particular, the spiking neural network may be a type of sparsely active neural (SAN) network that only processes data as needed. Such an event-driven approach may keep the system inactive unless there is a speech stimulus, thus reducing the power consumption during inactivity."); and performing additional processing of data using the other processing device based on the result (Khellah, [0059]: "In some examples, the processor may then send the spike to an application. For example, the application may be a voice controlled application. In some examples, the processor may activate an idle mode in response to generating the spike corresponding to the detected keyword."; [0066]: “In some examples, the CPU 702 can be a specialized digital signal processor (DSP) used for image processing.”). However, Khellah fails to expressly recite performing additional processing of audio data using the other processing device based on the result. Park teaches performing additional processing of audio data using the other processing device based on the result (Park, [0061]: "The DSP 241 in the wakeup mode receives the digital voice signal of the user from the memory 231. The DSP 241 determines whether to unlock the user terminal 200 based on the text extracted from the digital voice signal of the user through the voice recognition."). Khellah, Kim, and Park are analogous arts because they each belong to the same field of audio processing. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah, as modified by the audio processing system of Kim, to incorporate the teachings of Park to perform additional acoustic processing with a digital signal processor. Performing further processing with the audio data allows the system to take additional actions or derive additional information from the audio data, such as if the device should be unlocked based on the user’s voice or not (Park, [0061]). This ensures that the device can perform additional processing with the audio data once it has been activated from a low power mode.

Claim(s) 4-6, 18, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khellah as applied to claims 1, 9, 10, 12, 13, 16, and 20 above, and further in view of Park.

Regarding claim 4, the rejection of claim 1 is incorporated. Khellah discloses all of the elements of the current invention as stated above. Khellah further discloses wherein the computing device further comprises a digital signal processor (DSP) to perform at least one other task (Khellah, [0059]: "In some examples, the processor may then send the spike to an application. For example, the application may be a voice controlled application. In some examples, the processor may activate an idle mode in response to generating the spike corresponding to the detected keyword."; [0066]: “In some examples, the CPU 702 can be a specialized digital signal processor (DSP) used for image processing.”).

However, Khellah fails to expressly recite wherein the computing device further comprises a digital signal processor (DSP) to perform at least one other acoustic recognition task.
Park teaches wherein the computing device further comprises a digital signal processor (DSP) to perform at least one other acoustic recognition task (Park, [0061]: "The DSP 241 in the wakeup mode receives the digital voice signal of the user from the memory 231. The DSP 241 determines whether to unlock the user terminal 200 based on the text extracted from the digital voice signal of the user through the voice recognition.").

Khellah and Park are analogous arts because they each belong to the same field of audio processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah to incorporate the teachings of Park to perform additional acoustic processing with a digital signal processor. Performing further processing with the audio data allows the system to take additional actions or derive additional information from the audio data, such as if the device should be unlocked based on the user’s voice or not (Park, [0061]). This ensures that the device can perform additional processing with the audio data once it has been activated from a low power mode.

Regarding claim 5, the rejection of claim 4 is incorporated. Khellah, in view of Park, discloses all of the elements of the current invention as stated above. Khellah further discloses wherein the DSP is in an inactive state when the acoustic recognition task is performed by the apparatus (Khellah, [0017]: "The techniques described herein thus enable a low-power solution for keyword spotting using a sparsely active network based on spikes. The techniques described herein can be used to improve the ability of an always-on keyword spotting system with the ability to recognize more keywords. In particular, the spiking neural network may be a type of sparsely active neural (SAN) network that only processes data as needed.
Such an event-driven approach may keep the system inactive unless there is a speech stimulus, thus reducing the power consumption during inactivity.").

Regarding claim 6, the rejection of claim 5 is incorporated. Khellah, in view of Park, discloses all of the elements of the current invention as stated above. Khellah further discloses wherein the result is to trigger activation of the DSP (Khellah, [0059]: "In some examples, the processor may activate an idle mode in response to generating the spike corresponding to the detected keyword.").

Regarding claim 18, the rejection of claim 16 is incorporated. Khellah discloses all of the elements of the current invention as stated above. Khellah further discloses digital signal processing logic executable by the processor to: identify the particular result (Khellah, Fig. 5, 508; [0059]: "At block 508, the processor receives a spike corresponding to a detected keyword from the spiking neural network. For example, the single spike received from the output of the spiking neural network may correspond to a keyword or a key-phrase.").

However, Khellah fails to expressly recite perform further acoustic recognition tasks based on acoustic data generated by the microphone and the particular result.

Park teaches perform further acoustic recognition tasks based on acoustic data generated by the microphone and the particular result (Park, [0061]: "The DSP 241 in the wakeup mode receives the digital voice signal of the user from the memory 231. The DSP 241 determines whether to unlock the user terminal 200 based on the text extracted from the digital voice signal of the user through the voice recognition.").

Khellah and Park are analogous arts because they each belong to the same field of audio processing.
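The division of labor these rejections describe, an always-on neuromorphic block whose detection result wakes a DSP for heavier recognition work, can be summarized as a small state machine. Everything below, including the stand-in string-matching "detector", is hypothetical:

```python
# Hypothetical sketch of the wake-up flow combined from Khellah and Park:
# the neuromorphic block listens in low power; a keyword spike switches
# the DSP to full power for further acoustic processing.

class Device:
    def __init__(self):
        self.dsp_state = "low_power"

    def neuromorphic_listen(self, audio_frame: str) -> bool:
        keyword_spike = "wake-word" in audio_frame  # stand-in detector
        if keyword_spike:
            self.dsp_state = "full_power"           # result triggers the DSP
        return keyword_spike

    def dsp_process(self, audio_frame: str):
        if self.dsp_state != "full_power":
            return None                             # DSP stays inactive
        return f"recognized: {audio_frame}"

dev = Device()
dev.neuromorphic_listen("background noise")    # no spike; DSP stays asleep
asleep = dev.dsp_process("background noise")
dev.neuromorphic_listen("wake-word detected")  # spike wakes the DSP
awake = dev.dsp_process("unlock phrase")
```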
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah to incorporate the teachings of Park to perform additional acoustic processing with a digital signal processor. Performing further processing with the audio data allows the system to take additional actions or derive additional information from the audio data, such as if the device should be unlocked based on the user’s voice or not (Park, [0061]). This ensures that the device can perform additional processing with the audio data once it has been activated from a low power mode.

Regarding claim 19, the rejection of claim 16 is incorporated. Khellah discloses all of the elements of the current invention as stated above. Khellah further discloses the neuromorphic processing block is to perform the acoustic recognition task when the DSP is in a low power mode (Khellah, [0017]: "The techniques described herein thus enable a low-power solution for keyword spotting using a sparsely active network based on spikes. The techniques described herein can be used to improve the ability of an always-on keyword spotting system with the ability to recognize more keywords. In particular, the spiking neural network may be a type of sparsely active neural (SAN) network that only processes data as needed. Such an event-driven approach may keep the system inactive unless there is a speech stimulus, thus reducing the power consumption during inactivity.").

However, Khellah fails to expressly recite wherein the processor comprises a digital signal processor (DSP), the DSP is to perform the acoustic recognition task in a full power mode.

Park teaches wherein the processor comprises a digital signal processor (DSP), the DSP is to perform the acoustic recognition task in a full power mode (Park, [0061]: "The DSP 241 in the wakeup mode receives the digital voice signal of the user from the memory 231.
The DSP 241 determines whether to unlock the user terminal 200 based on the text extracted from the digital voice signal of the user through the voice recognition.").

Khellah and Park are analogous arts because they each belong to the same field of audio processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah to incorporate the teachings of Park to perform additional acoustic processing with a digital signal processor. Performing further processing with the audio data allows the system to take additional actions or derive additional information from the audio data, such as if the device should be unlocked based on the user’s voice or not (Park, [0061]). This ensures that the device can perform additional processing with the audio data once it has been activated from a low power mode.

Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khellah, in view of Park, as applied to claims 4-6, 18, and 19 above, and further in view of Krishnamurthy.

Regarding claim 7, the rejection of claim 4 is incorporated. Khellah, in view of Park, discloses all of the elements of the current invention as stated above. However, Khellah, in view of Park, fails to expressly recite wherein the apparatus is a neuromorphic acoustic processing block and is coupled to the DSP by an interconnect.

Krishnamurthy teaches wherein the apparatus is a neuromorphic acoustic processing block and is coupled to the DSP by an interconnect (Krishnamurthy, [0032]: "Neuromorphic hardware implements SNNs as multi-core neuro-processors (e.g., neuro-synaptic cores, neural-cores, neural-core structures, etc.). Neuro-cores often implement several neurons that are colocated with synapse memory blocks to hold synapse weights. The colocation of the synapse memory on the core is used to overcome data-memory bandwidth bottlenecks.
Generally, neural-cores are tiled and connected with a Network on Chip (NoC) or other interconnect fabric."; [0130]: "an interconnect unit(s) 2202 is coupled to: an application processor 2210 which includes a set of one or more cores 502A-N and shared cache unit(s) 1806; a system agent unit 1810; a bus controller unit(s) 1816; an integrated memory controller unit(s) 1814; a set or one or more coprocessors 2220 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; an static random access memory (SRAM) unit 2230; a direct memory access (DMA) unit 2232; and a display unit 2240 for coupling to one or more external displays."; [0132]: "For purposes of this application, a processing system includes any system that has a processor, such as, for example; a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.").

Khellah, Park, and Krishnamurthy are analogous arts because they each belong to the same field of signal processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah, as modified by the unlocking method of Park, to incorporate the teachings of Krishnamurthy to use an interconnect fabric to enable communication between different portions of the device. Using an interconnect fabric allows for many neuro-cores to be connected with each other and with other components (Krishnamurthy, [0032]). This ensures that the device components can communicate effectively and efficiently.

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khellah as applied to claims 1, 9, 10, 12, 13, 16, and 20 above, and further in view of van der Made et al. (US Pat. Pub. No. 2017/0229117 A1 hereinafter van der Made).

Regarding claim 8, the rejection of claim 1 is incorporated.
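The Network-on-Chip arrangement quoted from Krishnamurthy [0032], with tiled neuro-cores exchanging spike messages over an interconnect fabric, is commonly realized as a 2-D mesh with dimension-ordered routing. The cited paragraphs do not specify a routing algorithm, so the sketch below is a generic illustration under that assumption:

```python
# Hypothetical sketch of spike-message routing over a 2-D mesh NoC of
# tiled neuro-cores, using dimension-ordered (X-then-Y) routing.
# Coordinates and sizes are illustrative, not from the cited art.

def xy_route(src, dst):
    """Return the sequence of (x, y) tiles a spike message traverses."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:             # travel along X first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:             # then along Y
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

# A spike from core (0, 0) to core (3, 2) crosses 5 mesh links.
hops = len(xy_route((0, 0), (3, 2))) - 1
```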
Khellah discloses all of the elements of the current invention as stated above. However, Khellah fails to expressly recite wherein the spike generator comprises a cochlear fixed function block to model function of a biological ear.

Van der Made teaches wherein the spike generator comprises a cochlear fixed function block to model function of a biological ear (van der Made, [0030]: "The present invention mimics the biological auditory system and thus has higher hit rate performance than existing solutions. As the neuromorphic voice activation system 200 is specifically focused on detecting speech activity, it provides a better performance in noisy environment.").

Khellah and van der Made are analogous arts because they each belong to the same field of audio processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah to incorporate the teachings of van der Made to include a cochlear fixed function block to model function of a biological ear. This focuses the system to detect speech activity, which provides better performance in a noisy environment (van der Made, [0030]). This ensures a good user experience even in noisy environments.

Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khellah as applied to claims 1, 9, 10, 12, 13, 16, and 20 above, and further in view of Andreopoulos et al. (US Pat. Pub. No. 2022/0164970 A1 hereinafter Andreopoulos).

Regarding claim 11, the rejection of claim 1 is incorporated. Khellah discloses all of the elements of the current invention as stated above.
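Van der Made's quoted paragraph does not say how the biological ear is modeled; in the broader literature a "cochlear" front end is often a bank of band-pass channels whose rectified outputs are thresholded into spike events. The sketch below illustrates only that general idea; every name and parameter is hypothetical and nothing is taken from van der Made:

```python
# Illustrative only: a crude frequency-channel "cochlea" that emits one
# spike flag per band when that band's energy exceeds a threshold.
import math

def cochlea_spikes(samples, freqs, rate=16_000, threshold=0.25):
    """Crude per-channel energy detector emitting one spike flag per band."""
    spikes = []
    for f in freqs:
        # correlate against a sinusoid as a stand-in for a band-pass filter
        energy = abs(sum(s * math.sin(2 * math.pi * f * i / rate)
                         for i, s in enumerate(samples))) / len(samples)
        spikes.append(energy > threshold)
    return spikes

# A pure 440 Hz tone excites the 440 Hz channel but not the 2000 Hz one.
tone = [math.sin(2 * math.pi * 440 * i / 16_000) for i in range(1600)]
out = cochlea_spikes(tone, [440, 2000])
```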
However, Khellah fails to expressly recite wherein the neuromorphic compute block comprises a network of interconnected neuromorphic cores, each neuromorphic core is to implement a subset of a plurality of neurons in the SNN, and the neuromorphic compute block comprises a set of internal routers to route spike messages between the plurality of neurons during operation of the SNN.

Andreopoulos teaches wherein the neuromorphic compute block comprises a network of interconnected neuromorphic cores, each neuromorphic core is to implement a subset of a plurality of neurons in the SNN, and the neuromorphic compute block comprises a set of internal routers to route spike messages between the plurality of neurons during operation of the SNN (Andreopoulos, Fig. 3, [0030]: "In some embodiments a plurality of neurosynaptic cores are tiled on a chip. In an exemplary embodiment, a 64 by 64 grid of cores is tiled, yielding 4,096 cores, for a total of 1,048,576 neurons and 268,435,456 synapses. In such embodiments, neurons, synapses, and short-distance connectivity are implemented by the core circuit. Long-distance connectivity is logical. An exemplary embodiment is depicted in FIG. 3. Mesh router 301 provides communication between cores. Also on a given core, neuron to core 302 and core to axon 303 communication links are provided.").

Khellah and Andreopoulos are analogous arts because they each belong to the same field of neuromorphic processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the spiking neural network of Khellah to incorporate the teachings of Andreopoulos to include a network of interconnected neuromorphic cores and routers. This provides a low power solution for solving big data problems (Andreopoulos, [0017]). As such, the system is able to be used in small form factor devices or while a device is operating in a low power mode.
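The figures the examiner quotes from Andreopoulos [0030] are internally consistent, assuming 256 neurons per core with a full 256-by-256 synapse crossbar on each core (both of which the quoted totals imply, though the paragraph does not state them directly):

```python
# Checking the quoted Andreopoulos [0030] figures. The 256-neurons-per-core
# assumption is inferred by the editor, not stated in the quote.
cores = 64 * 64                            # 64-by-64 tiled grid
neurons_per_core = 256                     # inferred: 1,048,576 / 4,096
neurons = cores * neurons_per_core
synapses = cores * neurons_per_core ** 2   # all-to-all crossbar per core
```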
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zjajo et al. (US Pat. Pub. No. 2022/0230051 A1) discloses a spiking neural network. Yilmaz et al. (WO Pat. Pub. No. 2022/035379 A1) discloses a system for detecting keywords using a spiking neural network.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TYLER J BECKER whose telephone number is (703)756-1271. The examiner can normally be reached M-Th, 7:15am-5:45pm PT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TYLER BECKER/
Examiner, Art Unit 2657

/DANIEL C WASHBURN/
Supervisory Patent Examiner, Art Unit 2657

Prosecution Timeline

Dec 29, 2022
Application Filed
Feb 24, 2023
Response after Non-Final Action
Feb 24, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597433
SPEECH SIGNAL ENHANCEMENT METHOD AND APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12585893
Full Media Translator
2y 5m to grant Granted Mar 24, 2026
Patent 12518777
SYSTEMS AND METHODS FOR AUTHENTICATION USING SOUND-BASED VOCALIZATION ANALYSIS
2y 5m to grant Granted Jan 06, 2026
Patent 12499869
SOUND SYNTHESIS METHOD, SOUND SYNTHESIS APPARATUS, AND RECORDING MEDIUM STORING INSTRUCTIONS TO PERFORM SOUND SYNTHESIS METHOD
2y 5m to grant Granted Dec 16, 2025
Patent 12499311
Language Model Preprocessing with Weighted N-grams
2y 5m to grant Granted Dec 16, 2025
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
93%
With Interview (+19.0%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
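The headline projections follow from the stated career data (14 granted of 19 resolved, plus a 19-point interview lift). A rough reconstruction of the arithmetic; the tool's actual model may differ:

```python
# Reconstructing the dashboard figures from the stated career data.
granted, resolved = 14, 19
base = round(100 * granted / resolved)   # career allow rate, percent
with_interview = base + 19               # stated +19.0% interview lift
```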
