DETAILED ACTION
This Office action is in response to the reply filed on 10/27/2025 in Application No. 17/246,219. Claims 1-25 are presented for examination and are currently pending. Applicant’s arguments have been carefully and respectfully considered.
Allowable Subject Matter
2. Claims 11, 14, and 18 are objected to as being dependent upon a rejected base
claim, but would be allowable if rewritten in independent form including all of the
limitations of the base claim, and if the 112(b) rejection is overcome.
Response to Arguments
3. On page 10 of the remarks, the Applicant argued that “Applicant respectfully submits that the Examiner has not provided an explanation or reasoning to support the conclusion that one of ordinary skill in the art would be unable to understand what the term "about 0.5" means in the context of claims 1, 2, and 20 and in light of the specification. As discussed in MPEP 2173.05(b), "relatively terminology ... does not automatically render the claim indefinite," and "[a]cceptability of the claim language depends what one of ordinary skill in the art would understand what is claimed, in light of the specification”.
Further on page 11 of the remarks, the Applicant argued that “Accordingly, it was improper for the Examiner to conclude that the alleged relative terminology (i.e., "about 0.5") renders a claim indefinite without explaining why the terminology would be unclear to one of ordinary skill in the art is improper, especially without specifically considering the context of the claim and the above-referenced disclosure in the specification, Accordingly, for at least the reasons discussed above, Applicant respectfully traverses the rejection of claims 1, 2, and 20 under § 112(b), and requests that the rejections be withdrawn”.
The arguments above are not persuasive because paragraph [0145] of the instant specification does not provide a definition for the limitation “about 0.5” recited in the independent claims. Furthermore, while the instant specification discloses “initialization values can be about 0.5 (e.g., within about 10% of 0.5)” [0145], the Applicant is reminded that this is an example and not a definition. As a suggestion, the Applicant might amend the claims to recite that the initialization values are within 10% of 0.5, which would overcome the 112(b) rejection.
As a result, the limitation “about 0.5” is a relative term that would be unclear to a person of ordinary skill in the art, and the indefiniteness rejection is proper.
On pages 13-14 of the remarks, the Applicant argued that “As discussed in MPEP 2173.01(I), "a claim must be given its broadest reasonable interpretation consistent with the specification as it would be interpreted by one of ordinary skill in the art, Applicant respectfully submits that the Examiner's interpretation of 0.8 as being "about 0.5" is not consistent with the specification (see, e.g., paragraph [0145] of Applicant's specification)”.
Furthermore, the Applicant argued on page 14 that “Accordingly, Applicant reiterates the arguments from the Previous Response that the cited recitation in Dong, which states that weights of a convolutional layer were initialized with random values sampled from a Gaussian distribution with mean of 0.8, does not disclose or suggest "wherein a mean of the plurality of initialization values is about 0.5," as recited in independent claim 1. Additionally, Dong does not disclose or suggest any rationale for modifying the values used to initialize the weights of the convolution layer to have a mean of 0.5, nor does anything in Smith disclose or suggest making such a modification”.
The argument above is not persuasive because, as noted above, the instant specification's disclosure that “initialization values can be about 0.5 (e.g., within about 10% of 0.5)” [0145] is not a definition but an example. As a result, the limitation “about 0.5” is relative. Furthermore, according to MPEP 2111.01(II), “Though understanding the claim language may be aided by explanations contained in the written description, it is important not to import into a claim limitations that are not part of the claim. For example, a particular embodiment appearing in the written description may not be read into a claim when the claim language is broader than the embodiment”. As a result, Dong's teaching of weights initialized with random values sampled from a Gaussian distribution with a mean of 0.8 reads on the claimed “about 0.5” under the broadest reasonable interpretation.
Furthermore, it would have been obvious to a person having ordinary skill in the art to have used Dong's weights, initialized with random values sampled from a Gaussian distribution with a mean of 0.8 and a standard deviation of 0.05, to modify Smith for the benefit of training an SNN that has high-speed and low-energy data processing, as taught by Dong.
On page 14 of the remarks, the Applicant argued that “Accordingly, for at least the reasons discussed below, Applicant respectfully submits that the cited portion of Dong, which discusses initialization of weights in a convolutional layer at the beginning of training, is unrelated to initialization values of neurons of the SNN of Dong, either during training, or after training is completed”.
The argument above is not persuasive because, in a spiking neural network, each neuron is associated with an edge which has a weight value; as a result, the claimed limitation of “each of the plurality of neurons associated with a respective initialization value V0” can be broadly interpreted as each neuron being associated with an edge having an initialized weight value. As a result, it would have been obvious to a person having ordinary skill in the art that the neurons of Dong, the secondary reference, which are associated with edges having weight values initialized with a mean of 0.8 and a standard deviation of 0.05, can be used to modify the initialization at 0 taught by the primary reference Smith.
Furthermore, on page 15 of the remarks, the Applicant argued that “In other words, the body potential referenced in paragraph [0098] cited by the Examiner is a value in the neuron model that determines whether to generate an output spike, and paragraph [0098] indicates that these values are assumed to be set at 0 before each input volley is received. There does not appear to be any disclosure or suggestion in Smith of using a non-zero initialization value for the body potential of the neuron. Accordingly, in order to cure the deficiencies of Smith, the Examiner must show that Dong, or some other reference, discloses initialization values for a value in a neuron model that is equivalent to the neuron body/membrane potential of Smith”.
On page 17 of the remarks, the Applicant argued that “In other words, if anything in Dong is relevant to the initialization values of the neurons of the SNN in Smith (which Applicant does not concede), it is the value of Vrest, which is equal to zero. Smith also states that the initial value of the neuron's body potential is zero. Accordingly, Dong does not cure the deficiencies of Smith, because Dong also does not disclose "a mean of the plurality of initialization values is about 0.5, and a standard deviation of the initialization values is at least 0.05," as recited in independent claim 1”.
The argument above is not persuasive because Smith actually suggests using a non-zero initialization value for the body potential of the neuron. Smith teaches that “the synapses on input lines are associated with the neuron body that they feed. That is, a modeled neuron includes both the neuron body and the synapses” [0095], and that “neurons may have more than one synapse with non-zero weight, but all the non-zero weight synapses are confined to a narrow range” [0155]. This indicates that Smith suggests that synapse weights, which are edges associated with neurons, can be initialized to non-zero values.
On page 17 of the remarks, the Applicant argued that “Accordingly, this portion of Dong cannot cure the deficiencies of Smith, because it is related to initial weights using during training, not an initialized membrane potential value associated with neuron, which both Dong and Smith explicit state are set at zero (i.e., in Dong Vrest= 0, and in Smith the neuron's body potential is assumed to be 0 at the beginning of each input volley). Accordingly, for at least the reasons discussed above, Applicant respectfully submits that neither Smith nor Dong, either alone or in combination, discloses or suggests Applicant's independent claim 1, which is therefore allowable”.
The argument above is not persuasive, and claim 1 is not allowable, because the claimed limitation “each of the plurality of neurons associated with a respective initialization value V0” can be broadly interpreted as each neuron being associated with an edge having an initialized weight value. Furthermore, Smith teaches that the synapses on input lines are associated with the neuron body that they feed [0095], and that neurons may have more than one synapse with non-zero weight, but all the non-zero weight synapses are confined to a narrow range [0155].
On page 18 of the remarks, the Applicant argued that “Similarly, independent claims 2 and 20, which each include some features similar to features of independent claim 1 discussed above, are allowable for at least the same reasons that claim 1 is allowable. Finally, dependent claims 3-19 and 21-25, each of which depends on at least one of independent claims 2 and 20, are allowable for at least the same reasons that claims 2 and 20 are allowable”.
The arguments above are not persuasive; the rejected independent claims and rejected dependent claims are obvious over the prior art of record and are therefore not allowable.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
4. Claims 1-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The term "about 0.5" in claims 1, 2, and 20 is a relative term which renders the claims indefinite. The term "about 0.5" is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is not clear what range of values constitutes a mean that is "about 0.5".
Claims 3-19, 21, and 22, which are not specifically mentioned, are rejected due to their dependency on a rejected base claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1, 2, 6, 9, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596).
Regarding claim 1, Smith teaches a method for using a spiking neural network with improved efficiency (The multi-layer spiking neural networks that are the subject of the claimed invention are trained from the input layer proceeding one layer at a time to the output layer … This method does not require the repeated application of the same training patterns, and consequently is much faster and computationally far more efficient than methods used in traditional artificial neural networks [0028]),
the method (In contrast, FIG. 2 illustrates the method of the claimed invention wherein spike timing relationships across multiple lines convey information [0064]) comprising:
receiving image data (For example, the pattern may be a raw image expressed as pixels or it may be a pre-processed image [0066]; The structures and methods as described herein can be implemented in a number of ways: directly in special purpose hardware, in programmable hardware as an FPGA, or via software on a general purpose computer or graphics processor [0221]. The Examiner notes the images are received by a component of hardware);
providing the image data to a trained spiking neural network (SNN) (As one example, the input could be visual images [0204]; The multi-layer spiking neural networks that are the subject of the claimed invention are trained from the input layer proceeding one layer at a time to the output layer [0028]),
the SNN comprising a plurality of neurons (However, in the biological system, and in the SNNs described here, there are also inhibitory neurons which have the opposite effect of excitatory neurons [0126]),
each of the plurality of neurons associated with a respective initialization value V0 of a plurality of initialization values (Before each input volley, it is assumed that the value of the neuron's body potential is quiescent (initialized at 0) [0098]),
wherein a first layer of the trained SNN comprises a first subset of the plurality of neurons (a hierarchical classification system is illustrated in FIG. 26. The input to the first layer of the classifier are 5×5 receptive fields (RFs), each of which is a subimage of the full 28×28 image [0217]; Each of the RFs is processed by a trained classifier consisting of 10 neurons [0218]), and
a second layer of the trained SNN comprises a second subset of the plurality of neurons (At layer 2, overlapping regions from layer 1 are processed. For example a 2×2 set of RF classifier outputs, represented as four 10 line bundles, are merged to form a single 40 line volley. This volley forms the input to a layer 2 classifier [0218]; At each step, all the ECs in a layer is trained, then the evaluation set is applied to produce the training set for the ECs in the next layer [0235]), and
receiving output (In FIG. 3, bundles are labeled Xi at the inputs and Zi at the outputs [0071]; In Figure 3, the Examiner notes the outputs received are Z1 and Z2) from the trained SNN (A simple block diagram of an overall spiking neural network architecture is illustrated in FIG. 3 [0066]) at a time step τ (By definition, this single spike output volley is normalized, so any preliminary output spike becomes z1=Zmax (0 or Vmax), depending on whether a value volley or time volley is used [0178]; Then the normalized output time volley is Zn=<t1−tmin, t2−tmin, . . . tn−tmin>t. [0177]; The Examiner notes t1−tmin as time step τ),
wherein the output is based on activations of neurons in an output layer of the trained SNN (Multi-connection paths between pairs of neurons are modeled as a single compound path. Multi-layer networks are trained from the input layer proceeding one layer at a time to the output layer (abstract)), and
wherein τ is in a range of 1 to T (In a normalized volley, Tmax is the latest time at which a spike may occur, so 0≦ti≦Tmax. Tmax defines the maximum extent of a temporal frame of reference; i.e., the time interval over which all spikes in the volley must occur. [0078]; Figure 5 shows time steps 0, 1, 2, 3, 4, 5, 6, 7, and 8 wherein Tmax = 8); and
classifying the image data based on output of the trained SNN at time step τ (The output of this classifier is the final output, with 10 lines, where the first line to spike indicates the class to which the input pattern belongs [0218]; In FIG. 3, bundles are labeled Xi at the inputs and Zi at the outputs [0071]; Using labels associated with the training data, the N output neurons are trained in a supervised manner to spike for the class indicated by the label [0150]; Then the normalized output time volley is Zn=<t1−tmin, t2−tmin, . . . tn−tmin>t. [0177]; The Examiner notes that Zi is an output label that represents the class predicted for a given input Xi, and Zn=<t1−tmin, t2−tmin, . . . tn−tmin>t are time steps, with t1−tmin as time step τ).
Smith does not explicitly teach wherein a mean of the plurality of initialization values is about 0.5, and a standard deviation of the initialization values is at least 0.05.
Dong teaches wherein a mean of the plurality of initialization values is about 0.5 and a standard deviation of the initialization values is at least 0.05 (The weights of convolutional layer were initialized with random values sampled from a Gaussian distribution with mean of 0.8 and standard deviation of 0.05, pg. 8, first para.).
Since Smith desires SNN training that is much faster and computationally far more efficient [0028], and Dong teaches SNN training that becomes more efficient (pg. 6, first para.) and a trained SNN capable of high-speed and low-energy data processing (pg. 4, last para.), it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Smith with the teachings of Dong for the benefit of training an SNN that has high-speed and low-energy data processing (Dong, pg. 4, last para.).
Regarding claim 2, claim 2 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying. Further, Smith teaches performing a task associated with the data based on output of the trained SNN at time step τ (A related task is classification, where training patterns have labels which indicate a pre-specified cluster (or class) to which a given training pattern belongs…Using labels associated with the training data, the N output neurons are trained in a supervised manner to spike for the class indicated by the label [0150]; The output of this classifier is the final output, with 10 lines, where the first line to spike indicates the class to which the input pattern belongs [0218]; In FIG. 3, bundles are labeled Xi at the inputs and Zi at the outputs [0071]; Then the normalized output time volley is Zn=<t1−tmin, t2−tmin, . . . tn−tmin>t. [0177]; The Examiner notes that Zi is an output label that represents the class predicted for a given input Xi, and Zn=<t1−tmin, t2−tmin, . . . tn−tmin>t are time steps, with t1−tmin as time step τ).
Regarding claim 6, Modified Smith teaches the method of claim 2. Smith teaches wherein the data comprises image data comprising an array of pixels each associated with a value (For example, the pattern may be a raw image expressed as pixels or it may be a pre-processed image [0066]; Another example is simple gray scale images where the lighter the pixel value, the higher the value (and conversely, the darker the pixel, the lower the value) [0205]), and
providing the data to the trained SNN comprises: generating, for each pixel, a spike train based on the value associated with the pixel (If there are a total of n input elements (e.g., pixels in a grayscale image), then the value of each of the elements is translated into to spikes belonging to a volley [0206]),
wherein spikes are generated at a rate that is proportional to the value associated with the pixel; and providing, to each neuron of a plurality of neurons in an input layer of the trained SNN (In one application, the spiking neural network composed of CCs (Computational Column) may perform classification, where the input patterns are placed into one of a number of disjoint classes based on similarity. For example, if the input patterns are formed from hand written numerals 0-9 in grayscale pixel form [0069]),
a spike train associated with a respective pixel of the plurality of pixels (Fig. 1 denotes spike trains; If there are a total of n input elements (e.g., pixels in a grayscale image), then the value of each of the elements is translated into to spikes belonging to a volley [0206]).
Regarding claim 19, Modified Smith teaches the method of claim 2. Smith teaches wherein the data comprises time-series data (One computation method is to simply step through time incrementally, one c time unit per step [0183]; a levelized computational model based on abstract time steps at the higher level, and the local temporal frames of reference are correctly supported by the underlying implementation infrastructure [0090]).
Regarding claim 20, claim 20 is similar to claim 1 and is rejected in the same manner, with the same reasoning applying. Further, Smith teaches a system for using a spiking neural network with improved efficiency, the system comprising: at least one processor that is configured to (The structures and methods as described herein can be implemented in a number of ways: directly in special purpose hardware, in programmable hardware as an FPGA, or via software on a general purpose computer or graphics processor [0221]):
6. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) and further in view of Lee et al. ("Enabling spike-based backpropagation for training deep neural network architectures." Frontiers in neuroscience 14 (2020): 497482).
Regarding claim 3, Modified Smith teaches the method of claim 2. Modified Smith does not explicitly teach wherein the time step τ is determined based on a latency-optimized timing schedule established during refinement of the trained SNN, and wherein the output is indicative of a neuron in the output layer that had the most activations up to time T.
Lee teaches wherein the time step τ is determined based on a latency-optimized timing schedule (To obtain the optimal #time-steps required for our proposed training method, we trained VGG9 networks on CIFAR-10 dataset using different time-steps ranging from 10 to 120 (shown in Figure 6A) (pg. 12, right col., last para. to pg. 13, left col., first para.); Next, we construct our networks by leveraging frequently used architectures such as VGG and ResNet. To the best of our knowledge, this is the first work that demonstrates spike-based supervised BP learning for SNNs containing more than 10 trainable layers, pg. 16, left col., second para.) established during refinement of the trained SNN (The Figure 9 shows the relationship between inference accuracy, latency and #spikes/inference for ResNet11 networks trained on CIFAR-10 dataset, pg. 16, right col., second to the last para.), and
wherein the output is indicative of a neuron in the output layer that had the most activations up to time T (Whenever the membrane potential exceeds the firing threshold (Vth), the post-neuron in the output feature map spikes, pg. 4, Fig. 2).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Lee for the benefit of training deep convolutional SNNs directly (with input spike events) using spike-based backpropagation which achieves the best classification accuracies in MNIST, SVHN, and CIFAR-10 datasets (Lee, abstract).
7. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) in view of Lee et al. ("Enabling spike-based backpropagation for training deep neural network architectures." Frontiers in neuroscience 14 (2020): 497482) and further in view of Ruckauer et al. (US20190122110).
Regarding claim 4, Modified Smith teaches the method of claim 3. Smith teaches wherein the data comprises image data (For example, the pattern may be a raw image expressed as pixels or it may be a pre-processed image [0066]),
wherein the task comprises a computer vision task that includes classification of the image data (As one example, the input could be visual images. These images may undergo some form of filtering or edge detection [0204]; a hierarchical classification system is illustrated in FIG. 26. The input to the first layer of the classifier are 5×5 receptive fields (RFs), each of which is a subimage of the full 28×28 image [0217]), and
wherein the neuron in the output layer up to time step τ corresponds to a first class of a plurality of classes (Denote the training volley set as R; then there are a total of |R| volleys in the training set. For a given input line i (which feeds a compound synapse), let ti be the spike time within a given training volley. The weight associated with this spike is (Tmax−ti)/γ. The overall weight for the synapse associated with line i is then the average of the associated weights: Σ(Tmax−ti)/(γ|R|) [0192]; The output of this classifier is the final output, with 10 lines, where the first line to spike indicates the class to which the input pattern belongs [0218]; If there are N classes, then the CC classifier contains N neurons, one associated with each class. Using labels associated with the training data, the N output neurons are trained in a supervised manner to spike for the class indicated by the label [0150]).
Modified Smith does not explicitly teach a neuron in the output layer that had the most activations up to time step τ.
Ruckauer teaches wherein the neuron in the output layer that had the most activations up to time step τ (and neurons with activations greater than or equal to a predetermined threshold are taken into consideration [0093]; To determine first timing information ti(0) about a timing at which the i-th neuron first fires, a neuron potential at ti(0) is set to a neuron potential threshold θ, which is expressed as ui(ti(0)) = θ. Accordingly, for ti(0), Equation 4 is expressed as shown in Equation 5 below [0075]; … the output layer outputs a sufficiently accurate value or label [0055]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Ruckauer for the benefit of a neural network conversion operation, in which an activation of a neuron of an ANN is matched to a reciprocal of information about a timing at which a first spike is to be generated by a neuron of an SNN, thus advantageously reducing the number of computations performed by the SNN (Ruckauer [0061]).
8. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) and further in view of Rhodes et al. ("Real-time cortical simulation on neuromorphic hardware." Philosophical Transactions of the Royal Society A 378.2164 (2020): 20190160).
Regarding claim 5, Modified Smith teaches the method of claim 2. Smith further teaches: receiving output from the trained SNN at a time step τ' subsequent to time step τ (Then the normalized output time volley is Zn=<t1−tmin, t2−tmin, . . . tn−tmin>t. [0177]; The Examiner notes Zn=<t1−tmin, t2−tmin, . . . tn−tmin>t as output at different time steps, with t2−tmin as time step τ' subsequent to t1−tmin (τ));
performing the task based on output of the trained SNN at time step τ' (In one application, the spiking neural network composed of CCs may perform classification [0069]; As shown in FIG. 4, a CC is further composed of an Excitatory Column (EC) [0074]; This process of 2×2 reduction proceeds until a single classifier remains. The output of this classifier is the final output, with 10 lines, where the first line to spike indicates the class to which the input pattern belongs [0218]; In one implementation of training, all weights are initially set at zero. The first volley of the training set is applied, and an output spike is forced at all neuron outputs with a spike time close to, and before, the maximum time T.sub.max [0112]; A related task is classification, where training patterns have labels which indicate a pre-specified cluster (or class) to which a given training pattern belongs. Then, after training, arbitrary patterns can be evaluated by the SNN to determine the class to which they belong [0093]; Then the normalized output time volley is Zn=<t1−tmin, t2−tmin, . . . tn−tmin>t. [0177]; The Examiner notes Zn=<t1−tmin, t2−tmin, . . . tn−tmin>t as output at different time steps, with t2−tmin as time step τ' subsequent to t1−tmin (τ));
wherein the time step τ and the time step τ' are each determined based on confidence thresholds (At the time the sum of the spike responses first reaches a threshold value (denoted as θ), the neuron emits an output spike [0097]) corresponding to such time steps (The threshold is crossed at time t, then the spike in the preliminary output volley is assigned to be t. [0183]) in a predefined latency-accuracy curve (Figure 23; The next time this input pattern is re-presented, firing threshold will be reached sooner which implies a slight decrease of the post-synaptic spike latency….By iteration, it follows that upon repeated presentation of the same input spike pattern, the post-synaptic spike latency will tend to stabilize at a minimal value [0121]).
Modified Smith does not explicitly teach wherein T is a time step at which the SNN settles at a steady state, and outputs provided at time steps τ and τ' represent outputs from the SNN while the SNN is in a transient state during which firing rates and accuracy change over time before the steady state is achieved at T.
Rhodes teaches wherein T is a time step at which the SNN settles at a steady state (Simulations are performed for a total of 10s with a simulation time step of t=0.1ms for accuracy of produced spike times. Simulation output is split into an initial transient followed by steady-state activity, pg. 3, last para.; Note that for SpiNNaker simulations of the cortical microcircuit recorded data comprises output spike times of all model neurons (pg. 10, last para.); the total spikes produced for each simulation time step are plotted in figure 2 (both plots produced from baseline NEST simulation results). This demonstrates significant variations in counts of model spikes per time step from the initial transient phase of the simulation through to steady-state, pg. 6, first para.), and
outputs provided at time steps τ and τ' represent outputs from the SNN (Figure 2. Analysis of cortical microcircuit output activity simulated with NEST: total, excitatory and inhibitory, spikes produced per simulation timestep. Left inset shows an initial transient response, while right inset details steady-state oscillations (Online version in colour.), pg. 6; The right inset of figure 2 shows results from 1145<t<1170ms, pg. 6, first para.) while the SNN is in a transient state during which firing rates (while figure 1c shows the mean layer-wise firing rates for the model, the total spikes produced for each simulation time step are plotted in figure 2 (both plots produced from baseline NEST simulation results). This demonstrates significant variations in counts of model spikes per time step from the initial transient phase of the simulation through to steady-state, pg. 6, first para.)
and accuracy change over time (Simulations are performed for a total of 10s with a simulation time step of t=0.1ms for accuracy of produced spike times, pg. 3, last para.) before the steady state is achieved at T (Figure 1. Cortical microcircuit model: … (b)0.4s of steady state output spikes (5% of total spikes plotted for clarity), pg. 3).
9. Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) in view of Chen et al. (US20210011161 filed 07/09/2019) and further in view of Tang et al. ("Spike counts based low complexity SNN architecture with binary synapse." IEEE Transactions on Biomedical Circuits and Systems 13.6 (2019): 1664-1677).
Regarding claim 7, Modified Smith teaches the method of claim 2. Smith teaches wherein the data comprises image data comprising a plurality of spike streams (As has been stated, similar spike volleys represent similar input data patterns [0212]; As one example, the input could be visual images. These images may undergo some form of filtering or edge detection, with the output of the translation stage as spike volleys [0204]; For example, the pattern may be a raw image expressed as pixels or it may be a pre-processed image).
Modified Smith does not explicitly teach generated by an imaging device, but wherein the activations in the output layer of the trained SNN due to the plurality of spike streams does not dynamically determine time step τ during operation of the SNN.
Chen teaches generated by an imaging device (The vehicle 200 can include a camera, possibly at a location inside sensor unit 202. The camera can be a photosensitive instrument, such as a still camera, a video camera, etc., that is configured to capture a plurality of images of the environment of the vehicle 200 [0073]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Chen for the benefit of a LIDAR device which may determine distances by projecting light pulses onto the environment and detecting corresponding return light pulses reflected from various points within the environment; the intensity of each return light pulse may be measured by the LIDAR device and represented as a waveform that indicates the intensity of detected light over time (Chen [0024]).
Modified Smith does not explicitly teach but wherein the activations in the output layer of the trained SNN due to the plurality of spike streams does not dynamically determine time step τ during operation of the SNN.
Tang teaches but wherein the activations in the output layer of the trained SNN due to the plurality of spike streams does not dynamically determine time step τ during operation of the SNN (The basic idea is that since the precise timings of output spikes are not critical to make classification outputs in a rate-coding based SNN, pg. 1669, left col., first para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Tang for the benefit of an SNN processor that, in learning mode, processes 79 input spikes per time step in 3,024 clock cycles on average and consumes an average of 1.42 pJ/SOP with a throughput of 4.0 GSOP/s, which is one of the lowest energy/classification and energy/SOP figures (Tang, pg. 1675, left col., last para.).
Regarding claim 8, Modified Smith teaches the method of claim 7. Modified Smith does not explicitly teach wherein the imaging device comprises a light detection and ranging (LiDAR) device.
Chen teaches wherein the imaging device (The vehicle 200 can include a camera, possibly at a location inside sensor unit 202. The camera can be a photosensitive instrument, such as a still camera, a video camera, etc., that is configured to capture a plurality of images of the environment of the vehicle 200 [0073]) comprises
a light detection and ranging (LiDAR) device (These sensors may include a light detection and ranging (LIDAR) device [0024]).
The same motivation to combine as applied to dependent claim 7 applies here.
10. Claims 9, 17 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) and further in view of Ruckauer et al. (US20190122110).
Regarding claim 9, Modified Smith teaches the method of claim 2. Modified Smith does not explicitly teach wherein the trained SNN was generated based on a trained analog neural network (ANN).
Ruckauer teaches wherein the trained SNN was generated based on a trained analog neural network (ANN) (acquiring connection weight of an analog neural network (ANN) node of a pre-trained ANN [0005]; In an example implementation of an SNN of the present disclosure a pre-trained ANN may be converted to the SNN resulting in the SNN of the present disclosure which may thereby reflect a neural network having been successfully trained based on larger-scale data sets, e.g., as in the training of the ANN, and therefore the SNN of the present disclosure has improved accuracy in its results and/or outputs when compared to the typical SNNs. For example, through such conversion, the SNN may implement the trained objective of the ANN [0050]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Ruckauer for the benefit of a neural network conversion operation, in which an activation of a neuron of an ANN is matched to a reciprocal of information about a timing at which a first spike is to be generated by a neuron of an SNN, thus advantageously reducing the number of computations performed by the SNN (Ruckauer [0061]).
Regarding claim 17, Modified Smith teaches the method of claim 2. Modified Smith does not explicitly teach wherein the ANN is a convolutional neural network (CNN).
Ruckauer teaches wherein the ANN is a convolutional neural network (CNN) (A typical analog neural network (ANN) is a deep neural network including a plurality of hidden layers, and includes, for example, a convolutional neural network (CNN) [0044]).
The same motivation to combine as applied to dependent claim 9 applies here.
Regarding claim 21, Modified Smith teaches the system of claim 20. Ruckauer teaches wherein the at least one processor comprises a neuromorphic processor (the processor 620 may be a specialized computer, or may be representative of one or more processors to control a specialized SNN processor according to the conversion of the ANN. In an example, the specialized processor may be a neuromorphic chip or processor [0105]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Ruckauer for the benefit of a neural network conversion operation, in which an activation of a neuron of an ANN is matched to a reciprocal of information about a timing at which a first spike is to be generated by a neuron of an SNN, thus advantageously reducing the number of computations performed by the SNN (Ruckauer [0061]).
11. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) in view of Ruckauer et al. (US20190122110) and further in view of Rouhani et al. (US20210019605 filed 03/21/2019).
Regarding claim 10, Modified Smith teaches the method of claim 9. Modified Smith does not explicitly teach wherein the ANN was trained using a loss function LANN and the ANN was refined using a penalized loss function LIANN that included LANN and one or more penalized terms.
Rouhani teaches wherein the ANN was trained using a loss function LANN (Equation (2) below expresses a first loss term loss1 that accounts for the constraint to maximize the isolation between activations or clusters of outputs from the activation functions applied by the neuron in the hidden layer l. … wherein λ1 may denote a tradeoff hyper-parameter specifying the contribution of the first loss term loss1 during the training of the machine learning model 100 [0061]) and
the ANN was refined using a penalized loss function LIANN that included LANN and one or more penalized terms (This first loss term loss1 may be configured to penalize an activation distribution in which different activations (e.g., outputs from the activation functions applied by the neuron in the hidden layer l) are entangled and difficult to separate [0061]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Rouhani for the benefit of minimizing an error in an output of the first machine learning model by at least minimizing a loss function (Rouhani [0019]).
12. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) and further in view of Bazhenov et al. (US20220374679 filed 07/17/2020).
Regarding claim 12, Modified Smith teaches the method of claim 2. Modified Smith does not explicitly teach further comprising: refining the trained SNN using a loss function LSNN.
Bazhenov teaches further comprising: refining the trained SNN using a loss function LSNN ( … a loss function (termed elastic weight consolidation—EWC), which penalizes updates to weights deemed appropriate for previous tasks, made use of synaptic mechanisms of memory consolidation [0040]; However, it should be appreciated that any other plasticity rules can be applied to SNN during sleep phase and any other modifications can be applied to the network to simulate sleep phase [0041]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Bazhenov for the benefit of converting the architecture of the ANN to an equivalent Spiking Neural Network (SNN) and simulating a sleep phase in the SNN while using plasticity rules to modify synaptic weights (Bazhenov, abstract).
13. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) in view of Bazhenov et al. (US20220374679 filed 07/17/2020) and further in view of Ruckauer et al. (US20190122110)
Regarding claim 13, Modified Smith teaches the method of claim 12. Modified Smith does not explicitly teach wherein the loss function LSNN includes an accuracy term, a latency term, and a power consumption term.
Ruckauer teaches wherein the loss function LSNN includes an accuracy term, a latency term, and a power consumption term (In an example, by implementing the neural network conversion operation of the present disclosure, the implemented SNN may maintain an accuracy loss less than 1% [0062]; … and a latency until a calculated output is available is reduced [0057]; a spiking neural network (SNN) of the present disclosure employing “all-or-none pulses” to transfer information may be used, which may require less power consumption than a corresponding ANN implementation [0046]).
The same motivation to combine as applied to dependent claim 9 applies here.
14. Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) in view of Bazhenov et al. (US20220374679 filed 07/17/2020) and further in view of Numaoka et al. (US20220366723 filed 07/14/2020).
Regarding claim 15, Modified Smith teaches the method of claim 12. Smith teaches wherein refining the SNN further comprises: applying, to each of the plurality of neurons, a scaling factor ηj, wherein H is a set of scaling factors for the plurality of neurons (As specified above, a neuron takes as input normalized volleys containing spikes having times between 0 and Tmax. The neuron produces a single output spike which can take on a range of values that depends on a number of factors to be described later. Tmax and Wmax are related via the scale factor γ=Tmax/Wmax. That is, γ is the ratio of the maximum range of input values to the maximum range of spike times [0164]);
providing first labeled training data to the trained SNN (Using labels associated with the training data, the N output neurons are trained in a supervised manner to spike for the class indicated by the label [0150]);
receiving first output from the trained SNN for the first labeled training data (At layer 2, overlapping regions from layer 1 are processed. For example a 2×2 set of RF classifier outputs, represented as four 10 line bundles, are merged to form a single 40 line volley. This volley forms the input to a layer 2 classifier [0218]);
Modified Smith does not explicitly teach calculating a first loss based on the first labeled training data and the first output from the trained SNN using the loss function LSNN; adjusting values of the scaling factors in H based on the loss; applying the adjusted scaling factors to the plurality of neurons of the trained SNN; providing second labeled training data to the trained SNN; receiving second output from the trained SNN for the second labeled training data; and calculating a second loss based on the second labeled training data and the second output from the trained SNN using the loss function LSNN.
Numaoka teaches calculating a first loss based on the first labeled training data and the first output from the trained SNN using the loss function LSNN (A loss function 404 is a function defined using the emotion output and the emotion label as arguments [0107]; The emotion learning processing logic 304 includes an artificial intelligence using a learning model such as …, a spiking neural network (SNN) [0094]);
adjusting values of the scaling factors in H based on the loss; applying the adjusted scaling factors to the plurality of neurons of the trained SNN (Then, learning or training of the neural network is performed so as to minimize the loss function 404 by modifying the coupling weighting coefficient between neurons from the output layer toward the input layer of the full coupling layer 403 using a method such as back propagation [0107]; The emotion learning processing logic 304 includes an artificial intelligence using a learning model such as …, a spiking neural network (SNN) [0094]);
providing second labeled training data to the trained SNN (The emotion learning processing logic 304 inputs the data preprocessed by the learning data preprocessing logic 301 [0125]);
receiving second output from the trained SNN for the second labeled training data (The output layer of the full coupling layer 403 is a node for emotion output [0106]); and
calculating a second loss based on the second labeled training data and the second output from the trained SNN using the loss function LSNN (The emotion learning processing logic 304 includes an artificial intelligence using a learning model such as …, a spiking neural network (SNN) … It is assumed that the artificial intelligence used in the emotion learning processing logic 304 according to the present embodiment includes a mechanism for learning a result of calculation by a loss function … or the like to estimate an optimal solution (output) for a question (input) [0094]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Numaoka for the benefit of performing learning or training of the neural network so as to minimize the loss function (Numaoka [0107]).
Regarding claim 16, Modified Smith teaches the method of claim 12. Smith teaches wherein refining the SNN further comprises: setting an initialization value V0 for each of the plurality of neurons, wherein I includes a set of initialization values (Before each input volley, it is assumed that the value of the neuron's body potential is quiescent (initialized at 0) [0098]);
providing first labeled training data to the trained SNN (Using labels associated with the training data, the N output neurons are trained in a supervised manner to spike for the class indicated by the label [0150]);
receiving first output from the trained SNN for the first labeled training data (At layer 2, overlapping regions from layer 1 are processed. For example a 2×2 set of RF classifier outputs, represented as four 10 line bundles, are merged to form a single 40 line volley. This volley forms the input to a layer 2 classifier [0218]);
Modified Smith does not explicitly teach calculating a first loss based on the first labeled training data and the first output from the trained SNN using the loss function LSNN; adjusting values of the initialization values in I based on the loss; applying the adjusted initialization values to the plurality of neurons of the trained SNN; providing second labeled training data to the trained SNN; receiving second output from the trained SNN for the second labeled training data; and calculating a second loss based on the second labeled training data and the second output from the trained SNN using the loss function LSNN.
Numaoka teaches calculating a first loss based on the first labeled training data and the first output from the trained SNN using the loss function LSNN (A loss function 404 is a function defined using the emotion output and the emotion label as arguments [0107]; The emotion learning processing logic 304 includes an artificial intelligence using a learning model such as …, a spiking neural network (SNN) [0094]);
adjusting values of the initialization values in I based on the loss; applying the adjusted initialization values to the plurality of neurons of the trained SNN (Then, learning or training of the neural network is performed so as to minimize the loss function 404 by modifying the coupling weighting coefficient between neurons from the output layer toward the input layer of the full coupling layer 403 using a method such as back propagation [0107]; The emotion learning processing logic 304 includes an artificial intelligence using a learning model such as …, a spiking neural network (SNN) [0094]);
providing second labeled training data to the trained SNN (The emotion learning processing logic 304 inputs the data preprocessed by the learning data preprocessing logic 301 [0125]);
receiving second output from the trained SNN for the second labeled training data (The output layer of the full coupling layer 403 is a node for emotion output [0106]); and
calculating a second loss based on the second labeled training data and the second output from the trained SNN using the loss function LSNN (The emotion learning processing logic 304 includes an artificial intelligence using a learning model such as …, a spiking neural network (SNN) … It is assumed that the artificial intelligence used in the emotion learning processing logic 304 according to the present embodiment includes a mechanism for learning a result of calculation by a loss function … or the like to estimate an optimal solution (output) for a question (input) [0094]).
The same motivation to combine as applied to dependent claim 15 applies here.
15. Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) in view of Ruckauer et al. (US20190122110) and further in view of Chen et al. (US20210011161 filed 07/09/2019).
Regarding claim 22, Modified Smith teaches the system of claim 21. Smith teaches wherein the data comprises image data (For example, the pattern may be a raw image expressed as pixels or it may be a pre-processed image [0066]), the system further comprising:
Modified Smith does not explicitly teach an image data source in communication with the at least one processor, the image data source comprising an array of single-photon avalanche photodiodes (SPADs); and wherein the at least one processor that is further configured to: receive the image data from the image data source.
Chen teaches an image data source in communication with the at least one processor (Vehicle 100 may also include computer system 112 to perform operations, such as operations described therein. As such, computer system 112 may include at least one processor 113 (which could include at least one microprocessor) operable to execute instructions 115 stored in a non-transitory, computer-readable medium, such as data storage 114 [0057]),
the image data source comprising an array of single-photon avalanche photodiodes (SPADs) (In some embodiments, the one or more detectors of the laser rangefinder/LIDAR 128 may include one or more photodetectors. ... In some examples, such photodetectors may even be capable of detecting single photons (e.g., single-photon avalanche diodes (SPADs)). Further, such photodetectors can be arranged (e.g., through an electrical connection in series) into an array [0046]); and
wherein the at least one processor that is further configured to: receive the image data from the image data source (Each of these sensors may communicate environment data to a processor in the vehicle about information each respective sensor receives [0086]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Chen for the benefit of a LIDAR device which may determine distances by projecting light pulses onto the environment and detecting corresponding return light pulses reflected from various points within the environment; the intensity of each return light pulse may be measured by the LIDAR device and represented as a waveform that indicates the intensity of detected light over time (Chen [0024]).
16. Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) in view of Deng et al. ("Tianjic: A unified and scalable chip bridging spike-based and continuous neural computation." IEEE Journal of Solid-State Circuits 55.8 (2020): 2228-2246).
Regarding claim 23, Modified Smith teaches the system of claim 20. Modified Smith does not explicitly teach further comprising: a communications connection to an image data source, configured to communicate a stream of image data from the image data source to the processor; and wherein time step τ is measured from a commencement of computation of the SNN for the stream of image data, and predetermined based upon a refinement of the SNN.
Deng teaches further comprising: a communications connection to an image data source (The SNN recognizes the voice commands from human, the CNN receives resized images from the camera and detects the initial location, pg. 2242, left col., second para.),
configured to communicate a stream of image data from the image data source to the processor (and a neural state machine (NSM) for decision making, as illustrated in Fig. 25, pg. 2242, left col., second para. The Examiner interprets the neural state machine as the processor); and
wherein time step τ is measured from a commencement of computation of the SNN for the stream of image data (There are two levels of temporal execution unit in our design: time step and time phase that work synergistically to support a flexible operating pattern …each time step has multiple time phases … For example, by configuring the timing registers, i.e., start_phase, end_phase, #on_phases, and #off_phase, pg. 2235, right col., second para.), and
predetermined based upon a refinement of the SNN (In SNN mode with 1 < Tw ≤ 16, A/S_MEM0 and A/S_MEM1 are merged to be a whole chunk to contain the spike pattern within a historical temporal window (i.e., Tw time phases), which is updated at each time phase, pg 2234, left col., last para.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Deng for the benefit of object tracking in a video stream (Deng, left col., first para.).
17. Claims 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al (US20170286828) in view of Dong et al. ("Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network." PloS one 13.11 (2018): e0204596) in view of Deng et al. ("Tianjic: A unified and scalable chip bridging spike-based and continuous neural computation." IEEE Journal of Solid-State Circuits 55.8 (2020): 2228-2246) and further in view of Tang et al. ("Spike counts based low complexity SNN architecture with binary synapse." IEEE Transactions on Biomedical Circuits and Systems 13.6 (2019): 1664-1677).
Regarding claim 24, Modified Smith teaches the system of claim 23. Modified Smith does not explicitly teach wherein the trained SNN is structured so as not to generate the output until time step τ regardless of intervening neuron activations.
Tang teaches wherein the trained SNN is structured so as not to generate the output until time step τ regardless of intervening neuron activations (When the membrane voltage exceeds the predefined membrane threshold Vth, an output spike is generated from the neuron, and its membrane voltage is reset. The excitatory layer neurons are fully-connected with the inhibitory neurons. Through the connections, the lateral inhibition occurs that makes competitions among the excitatory neurons. The winner-take-all (WTA) mechanism that means that once a winner is chosen, other neurons are prohibited from generating output spikes (pg. 1665, left col., last para. to right col., first para.). The Examiner notes that spikes represent neuron activations according to the instant specification).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Tang for the benefit of an SNN processor that, in learning mode, processes 79 input spikes per time step in 3,024 clock cycles on average and consumes an average of 1.42 pJ/SOP with a throughput of 4.0 GSOP/s, which is one of the lowest energy/classification and energy/SOP figures (Tang, pg. 1675, left col., last para.).
Regarding claim 25, Modified Smith teaches the system of claim 23. Modified Smith does not explicitly teach wherein the trained SNN is structured so as to generate the output based upon cumulative activations of the neurons in the output layer from the commencement of computation until the time step τ.
Tang teaches wherein the trained SNN is structured so as to generate the output based upon cumulative activations of the neurons in the output layer from the commencement of computation until the time step τ (In the proposed accumulation based computing scheme, as the accumulated number of input spikes are used per one membrane voltage updates, multiple excitatory neurons generate output spikes at a time (pg. 1669, right col., second to the last para.). The Examiner notes that spikes represent neuron activations according to the instant specification).
The same motivation to combine as applied to dependent claim 24 applies here.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO whose telephone number is (571)272-8670. The examiner can normally be reached Monday-Friday 8am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.G./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148