DETAILED ACTION
This action is in response to the filing on 11/24/2025. Claims 1-20 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7-9, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over RAN, Y. et al. (Text classification algorithm based on sparse distributed representation, first cited in the IDS filed 02/27/2023), hereinafter Ran, in view of Boudreau, Luke G. (Contractive Autoencoding for Hierarchical Temporal Memory and Sparse Distributed Representation Binding), hereinafter Luke, and further in view of Fishel et al. (US 2019/0340490 A1), hereinafter Fishel, and further in view of YOO (US 2022/0284273 A1), hereinafter Yoo.
Regarding claim 1, Ran teaches a method (Ran discloses a method for classification [see Ran, pg. 2, FIG. 1]). It would have been obvious to one of ordinary skill in the art before the effective filing date to perform the method of Ran on a general-purpose computer, thereby incorporating a method performed by a computational system, the method comprising:
receiving data as input and producing, based on the data, a first Sparse Distributed Representation (SDR) as output (Ran discloses receiving a text document as input and converting the document to its SDR [see Ran, pg. 2, FIG. 1; Section III, Subsection A, para. 1]);
receiving the first SDR as input and producing, based on the first SDR, a second Sparse Distributed Representation (SDR) as output (Ran discloses receiving the first SDR, the doc-SDR, as input and producing a class-SDR [see Ran, pg. 2, FIG. 1; Section III, Subsection B, para. 2]);
receiving the second SDR as input and producing, based on the second SDR, a signature for a class of which the data is a part (Ran discloses receiving the second SDR, the class-SDR, as input, and producing a classification based on the similarity between the first SDR, the doc-SDR, and the second SDR, the class-SDR [see Ran, pg. 2, FIG. 1; Section III, Subsection C]).
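For clarity of the record, the three-stage flow mapped to Ran's FIG. 1 above (data to doc-SDR, doc-SDR to class-SDR, class-SDR to classification) can be illustrated with a minimal Python sketch. The sketch is an illustrative assumption only, not Ran's implementation; all names are invented and the stage internals are left abstract:

    from typing import Callable

    Sdr = frozenset[int]  # an SDR modeled as the set of its set-bit indexes

    def pipeline(encode: Callable[[str], Sdr],
                 pool: Callable[[Sdr], Sdr],
                 classify: Callable[[Sdr], str],
                 document: str) -> str:
        first_sdr = encode(document)   # data -> first SDR (doc-SDR)
        second_sdr = pool(first_sdr)   # first SDR -> second SDR (class-SDR)
        return classify(second_sdr)    # second SDR -> signature for a class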
However, Ran fails to teach a computational system comprising (i) an encoder, (ii) a neural processing unit comprising a streaming-based reconfigurable instruction architecture and a plurality of artificial neurons, and (iii) a classifier; receiving, by the encoder, data as input and producing, based on the data, a first Sparse Distributed Representation (SDR) as output, wherein the plurality of artificial neurons has synaptic connections to offset locations in the first SDR; receiving, by the neural processing unit, the first SDR as input and producing, based on the first SDR, a second Sparse Distributed Representation (SDR) as output, wherein the second SDR is produced using the streaming-based reconfigurable instruction architecture and the synaptic connections; and receiving, by the classifier.
In the same field of endeavor, Luke teaches:
a computational system comprising (i) an encoder, (ii) a neural processing unit comprising a plurality of artificial neurons, and (iii) a classifier (Luke discloses a computational system comprising an encoder, an HTM system, and a classifier [see Luke, pg. 35, Figure 2.9], that artificial neurons in the HTM system are referred to as HTM neurons or cells, and that the HTM system comprises cells [see Luke, Section 2.1, para. 1 and Figure 2.1]);
receiving, by the encoder, data as input and producing, based on the data, a first Sparse Distributed Representation (SDR) as output (Luke discloses the encoder receiving data as input and outputting an SDR [see Luke, pg. 35, Figure 2.9]),
wherein the plurality of artificial neurons has synaptic connections to offset locations in the first SDR (Luke discloses the HTM system receiving a first SDR from the encoder [see Luke, Figure 2.9], and that the spatial pooler of the HTM system forms synaptic connections between cell columns and offset locations of the input [see Luke, Figure 2.2]);
receiving, by the neural processing unit, the first SDR as input and producing, based on the first SDR, a second Sparse Distributed Representation (SDR) as output (Luke discloses the HTM system receiving the first SDR as input and outputting a second SDR [see Luke, pg. 35, Figure 2.9]),
wherein the second SDR is produced using the streaming-based reconfigurable instruction architecture and the synaptic connections (Luke discloses that the second SDR is produced using the HTM system with the first SDR as input [see Luke, Figure 2.9], that the HTM system uses the spatial pooler to learn the first SDR with synaptic connections between cell columns and offset locations of the first SDR [see Luke, Figure 2.1], and how the second SDR is formed from the active and inactive cell columns of the spatial pooler [see Luke, Figure 2.6]);
receiving, by the classifier, the second SDR as input and producing, based on the second SDR, a class of which the data is a part (Luke discloses the classifier receiving the second SDR as input and producing a classification [see Luke, pg. 35, Figure 2.9]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate a computational system comprising (i) an encoder, (ii) a neural processing unit, and (iii) a classifier; receiving, by the encoder; receiving, by the neural processing unit; and receiving, by the classifier, as suggested in Luke, into Ran because both methods produce SDRs of input data to perform classification (see Ran, pg. 2, FIG. 1; see Luke, pg. 35, Figure 2.9). Incorporating the teaching of Luke into Ran would achieve the predictable result of performing classification on input data that has been processed into an SDR.
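For clarity of the record, the spatial-pooling mechanism attributed to Luke above can be illustrated with a minimal Python sketch. This is an illustrative assumption, not Luke's implementation: the input width, column count, and sparsity level are invented values, while the six potential synapses per proximal segment follows Luke, pg. vii:

    import numpy as np

    rng = np.random.default_rng(0)

    INPUT_BITS = 64     # width of the first SDR (assumed for illustration)
    NUM_COLUMNS = 16    # cell columns in the spatial pooler (assumed)
    SYNAPSES = 6        # potential synapses per proximal segment (Luke, pg. vii)
    ACTIVE_COLUMNS = 4  # active columns enforcing output sparsity (assumed)

    # Each column's proximal segment connects to fixed offset locations in
    # the first SDR -- the column's receptive field.
    receptive_fields = np.stack(
        [rng.choice(INPUT_BITS, size=SYNAPSES, replace=False)
         for _ in range(NUM_COLUMNS)]
    )

    def spatial_pool(first_sdr: np.ndarray) -> np.ndarray:
        # Overlap of each column = number of its synapses landing on set bits.
        overlaps = first_sdr[receptive_fields].sum(axis=1)
        # Winner-take-all: the k columns with the highest overlap become
        # active, and the activity pattern of the columns is the second SDR.
        second_sdr = np.zeros(NUM_COLUMNS, dtype=np.uint8)
        second_sdr[np.argsort(overlaps)[-ACTIVE_COLUMNS:]] = 1
        return second_sdr

    first_sdr = np.zeros(INPUT_BITS, dtype=np.uint8)
    first_sdr[rng.choice(INPUT_BITS, size=8, replace=False)] = 1  # sparse input
    print(spatial_pool(first_sdr))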
However, the combination of Ran and Luke fails to teach a neural processing unit comprising a streaming-based reconfigurable instruction architecture.
In the same field of endeavor, Fishel teaches:
a neural processing unit comprising a streaming-based architecture (Fishel discloses that the CPU and neural processor are connected by a bus [see Fishel, para. 32 and FIG. 2] and that the CPU sends task descriptors to the neural processor [see Fishel, para. 99]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate a neural processing unit comprising a streaming-based architecture as suggested in Fishel into the combination of Ran and Luke because Luke discloses the HTM learning system [see Luke, pg. 35, Figure 2.9], and Fishel discloses a neural processor which can incorporate any neural network architecture, including different types of layers or orders of layers, as a subcomponent [see Fishel, para. 83]. Thus, it would be possible to incorporate the HTM network of Luke as the neural network of the neural processor disclosed by Fishel. It would have been further obvious to incorporate the teaching of Fishel into the combination of Ran and Luke because both methods are directed to artificial intelligence applications (see Ran, Abstract; see Fishel, Abstract). Incorporating the teaching of Fishel into the combination of Ran and Luke would perform operations in a fast and power-efficient manner while relieving CPU 208 of resource-intensive operations associated with neural network operations (see Fishel, para. 38).
However, the combination of Ran, Luke, and Fishel fails to teach a neural processing unit comprising a reconfigurable instruction architecture.
In the same field of endeavor, Yoo teaches:
a neural processing unit comprising a reconfigurable instruction architecture (Yoo discloses a neural processor [see Yoo, Abstract] implementing one or more hardware components, including an MISD architecture capable of responding to and executing instructions [see Yoo, para. 83]. Thus, the MISD architecture is reconfigurable in that the instructions it executes can be reconfigured).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate a neural processing unit comprising a reconfigurable instruction architecture as suggested in Yoo into the combination of Ran, Luke, and Fishel because both methods are directed to artificial intelligence applications (see Ran, Abstract; see Yoo, para. 5). Incorporating the teaching of Yoo into the combination of Ran, Luke, and Fishel would enhance efficiency while reducing the costs of the accumulators and the adders (see Yoo, para. 59).
Regarding claim 2, the combination of Ran, Luke, Fishel, and Yoo as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein the classifier further receives a label that is indicative of the class (Luke discloses that the data goes to the encoder to generate the first SDR, which is then used to make the second SDR and input to the classifier [see Luke, pg. 35, Figure 2.9]; thus, because the data and SDRs are labelled [see Luke, Subsection 4.2.3, para. 1], the label should follow the data from the encoder to the classifier).
Regarding claim 3, the combination of Ran, Luke, Fishel, and Yoo as applied in claim 1 above teaches all the limitations of claim 2 and further teaches:
associating, by the classifier, the signature with the label in a data structure, such that the signature is representative of the class (Ran discloses creating SDRs that are representative of each class [see Ran, Section III, Subsection B, para. 1; pg. 3, FIG. 2]. Ran further discloses that these SDRs are used as the multiple signatures for the class label matrix [see Ran, Section III, Subsection B, para. 5]. And Luke discloses that the data goes to the encoder to generate the first SDR, which is then used to make the second SDR and input to the classifier [see Luke, pg. 35, Figure 2.9]; thus, because the data and SDRs are labelled [see Luke, Subsection 4.2.3, para. 1], the label should follow the data from the encoder to the classifier. Thus, in combination, the labels are associated with each class in the classification label feature vector matrix such that the signature is representative of the class).
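A minimal sketch of this association step, assuming a simple label-to-signature mapping (the dict layout is an illustrative assumption, not Ran's class label matrix):

    # Associate each class label with its signature so the signature is
    # representative of the class; bit indexes are invented for illustration.
    signatures: dict[str, frozenset[int]] = {}

    def associate(label: str, signature: frozenset[int]) -> None:
        signatures[label] = signature  # label -> class signature (class-SDR)

    associate("sports", frozenset({3, 17, 42, 88}))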
Regarding claim 4, the combination of Ran, Luke, Fishel, and Yoo as applied in claim 1 above teaches all the limitations of claim 2 and further teaches:
wherein the label is included in the data received by the encoder (Luke discloses that the data goes to the encoder to generate the first SDR [see Luke, pg. 35, Figure 2.9], and that each SDR has a class label [see Luke, Subsection 4.2.3, para. 1]; thus, the data must be labelled before being received by the encoder).
Regarding claim 7, the combination of Ran, Luke, Fishel, and Yoo as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
comparing, by the classifier, the signature against multiple signatures, each of which is representative of a different class or subclass; determining, by the classifier, that the signature matches one of the multiple signatures (Ran discloses comparing the first SDR, the doc-SDR, with multiple SDRs, the class-SDRs, each of which is representative of a different class [see Ran, Section III, Subsection C]. Ran further discloses computing the overlap values for all of the comparisons, and ranking the similarity such that the category label with the highest similarity is the predicted result for the document [see Ran, Section III, Subsection C]);
outputting, by the classifier, a prediction for the data based on the matching signature (Ran discloses returning the label as the prediction for the data based on the matching similarity [see Ran, pg. 2, FIG. 1]).
Regarding claim 8, the combination of Ran, Luke, Fishel, and Yoo as applied in claim 1 above teaches all the limitations of claim 7 and further teaches:
wherein each of the multiple signatures is indicative of a reference Sparse Distributed Representation (SDR) that is determined to be representative of the corresponding class or subclass (Ran discloses creating SDRs that are representative of each class [see Ran, Section III, Subsection B, para. 1; pg. 3, FIG. 2]. Ran further discloses that these SDRs are used as the multiple signatures for the class label matrix [see Ran, Section III, Subsection B, para. 5]).
Regarding claim 9, the combination of Ran, Luke, Fishel, and Yoo as applied in claim 1 above teaches all the limitations of claim 7 and further teaches:
wherein said comparing causes a value to be produced for each of the multiple signatures that is representative of amount of overlap with the signature, and wherein said determining comprises establishing the matching signature has a highest amount of overlap as indicated by a highest value (Ran discloses comparing the first SDR, the doc-SDR, with multiple SDRs, the class-SDRs, each of which is representative of a different class [see Ran, Section III, Subsection C]. Ran further discloses computing the overlap values for all of the comparisons, and ranking the similarity such that the category label with the highest similarity is the predicted result for the document [see Ran, Section III, Subsection C]).
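The overlap comparison mapped to Ran above can be illustrated with a short Python sketch; the labels and bit indexes are invented for illustration and are not drawn from Ran:

    def classify_by_overlap(doc_sdr: set[int],
                            class_sdrs: dict[str, set[int]]) -> str:
        # Each overlap value counts the shared set bits; the label whose
        # signature has the highest overlap is returned as the prediction.
        overlaps = {label: len(doc_sdr & sdr) for label, sdr in class_sdrs.items()}
        return max(overlaps, key=overlaps.get)

    class_sdrs = {"sports": {3, 17, 42, 88}, "finance": {5, 17, 63, 91}}
    print(classify_by_overlap({3, 17, 42, 90}, class_sdrs))  # -> "sports"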
Regarding claim 17, claim 17 contains substantially similar limitations to those found in claim 1. Therefore, it is rejected for the same reasons as claim 1 above. Additionally, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the combination of Ran, Luke, Fishel, and Yoo as instructions stored on a non-transitory medium that, when executed by a processing unit of a computational system, cause the computational system to perform operations comprising the method.
Regarding claim 18, the combination of Ran, Luke, Fishel, and Yoo as applied in claim 1 above teaches all the limitations of claim 17 and further teaches:
wherein the data is accompanied by a label that is indicative of the class, and wherein the operations further comprise (Luke discloses that the data goes to the encoder to generate the first SDR, which is then used to make the second SDR and input to the classifier [see Luke, pg. 35, Figure 2.9]; thus, because the data and SDRs are labelled [see Luke, Subsection 4.2.3, para. 1], the label should follow the data from the encoder to the classifier):
associating the signature with the label in a data structure, such that the signature is representative of the class (Ran discloses creating SDRs that are representative of each class [see Ran, Section III, Subsection B, para. 1; pg. 3, FIG. 2]. Ran further discloses that these SDRs are used as the multiple signatures for the class label matrix [see Ran, Section III, Subsection B, para. 5]. And Luke discloses that the data goes to the encoder to generate the first SDR, which is then used to make the second SDR and input to the classifier [see Luke, pg. 35, Figure 2.9]; thus, because the data and SDRs are labelled [see Luke, Subsection 4.2.3, para. 1], the label should follow the data from the encoder to the classifier. Thus, in combination, the labels are associated with each class in the classification label feature vector matrix such that the signature is representative of the class).
Regarding claim 19, claim 19 contains substantially similar limitations to those found in claim 7 above. Consequently, claim 19 is rejected for the same reasons.
Regarding claim 20, claim 20 contains substantially similar limitations to those found in claim 8 above. Consequently, claim 20 is rejected for the same reasons.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over RAN, Y. et al. (Text classification algorithm based on sparse distributed representation, first cited in the IDS filed 02/27/2023), hereinafter Ran, in view of Boudreau, Luke G. (Contractive Autoencoding for Hierarchical Temporal Memory and Sparse Distributed Representation Binding), hereinafter Luke, and further in view of Fishel et al. (US 2019/0340490 A1), hereinafter Fishel, and further in view of YOO (US 2022/0284273 A1), hereinafter Yoo, as applied in claim 1 above, and further in view of Scott Purdy (Encoding Data for HTM Systems, first cited in the IDS filed 02/27/2023), hereinafter Purdy.
Regarding claim 5, the combination of Ran, Luke, Fishel, and Yoo as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein to produce the first SDR, the encoder converts the data to a sparse format (Luke discloses that data must be converted to a binary representation in order to be used as input to the Spatial Pooler, and that in cases where data is not binary an encoder must be used to convert the data to an SDR [see Luke, Section 2.1.3, para. 1]).
However, the combination of Ran, Luke, Fishel, and Yoo fails to teach wherein to produce the SDR, the encoder converts the data from a vector representation to a sparse hyperdimensional format.
In the same field of endeavor, Purdy teaches:
wherein to produce the SDR, the encoder converts the data from a vector representation to a sparse hyperdimensional format (Purdy discloses a plurality of encoders for representing data in SDRs [see Purdy, Abstract], one such category being encoders for geospatial data, which would be in the form of a vector; Purdy discloses a plurality of encoders for handling this category of geospatial data [see Purdy, Section 5]. Purdy further discloses that encoders can be combined for data requiring multiple values; thus, a vector of data could be handled by encoding each value in the vector and then combining them [see Purdy, Section 8]. Thus, the data to the encoder can be represented in a vector representation and then converted to a sparse hyperdimensional format).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate wherein to produce the SDR, the encoder converts the data from a vector representation to a sparse hyperdimensional format as suggested in Purdy into the combination of Ran, Luke, Fishel, and Yoo because Luke references Purdy for a plurality of types of encoders which can be used for non-binary data to be converted into an SDR [see Luke, Section 2.1.3, para. 1] and Purdy discloses a plurality of encoders for representing data in SDRs [see Purdy, Abstract]. Incorporating the teaching of Purdy into the combination of Ran, Luke, Fishel, and Yoo would provide a plurality of encoders for converting a plurality of data types into an SDR [see Purdy, Sections 3, 4, 5, and 6].
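The encoder behavior attributed to Purdy can be illustrated with a minimal sketch that concatenates simple per-value encodings into one high-dimensional sparse representation. The function names, bit widths, and parameters are illustrative assumptions, not Purdy's implementation:

    import numpy as np

    def encode_scalar(value: float, vmin: float, vmax: float,
                      n_bits: int = 128, w: int = 5) -> np.ndarray:
        # Simple number encoder: a contiguous run of w active bits whose
        # position tracks the value (in the spirit of Purdy, Section 3.1).
        value = min(max(value, vmin), vmax)
        start = int((value - vmin) / (vmax - vmin) * (n_bits - w))
        sdr = np.zeros(n_bits, dtype=np.uint8)
        sdr[start:start + w] = 1
        return sdr

    def encode_vector(values: list[float], vmin: float, vmax: float) -> np.ndarray:
        # Combine per-component encodings by concatenation (in the spirit of
        # Purdy, Section 8), yielding one sparse hyperdimensional representation.
        return np.concatenate([encode_scalar(v, vmin, vmax) for v in values])

    sdr = encode_vector([0.1, 0.5, 0.9], vmin=0.0, vmax=1.0)
    print(sdr.size, int(sdr.sum()))  # 384 total bits, only 15 set -> sparse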
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over RAN, Y. et al. (Text classification algorithm based on sparse distributed representation, first cited in the IDS filed 02/27/2023), hereinafter Ran, in view of Boudreau, Luke G. (Contractive Autoencoding for Hierarchical Temporal Memory and Sparse Distributed Representation Binding), hereinafter Luke, and further in view of Fishel et al. (US 2019/0340490 A1), hereinafter Fishel, and further in view of YOO (US 2022/0284273 A1), hereinafter Yoo, as applied in claim 1 above, and further in view of F. D. S. Webber (Semantic folding theory and its application in semantic fingerprinting), hereinafter Webber.
Regarding claim 6, the combination of Ran, Luke, Fishel, and Yoo as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
the first and second SDRs (Ran discloses receiving a text document as input and converting the document to its SDR [see Ran, pg. 2, FIG. 1; Section III, Subsection A, para. 1], and receiving the first SDR, the doc-SDR, as input and producing a class-SDR [see Ran, pg. 2, FIG. 1; Section III, Subsection B, para. 2]).
However, the combination of Ran, Luke, Fishel, and Yoo fails to teach wherein the first and second SDRs are representative of unordered collections of set bits.
In the same field of endeavor, Webber teaches:
wherein the SDR is representative of an unordered collection of set bits (Webber illustrates the difference between both the ordered dense representation [see Webber, pg. 13, Fig. 1], and their unordered sparse representation [see Webber, pg. 14, Fig. 2]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate wherein the SDR is representative of an unordered collection of set bits as suggested in Webber into the combination of Ran, Luke, Fishel, and Yoo, thereby teaching wherein the first and second SDRs are representative of unordered collections of set bits, because Ran references the semantic folding theory of Webber for encoding into SDRs [see Ran, Section I, para. 4], Ran further discloses using the Retina API for generating the SDRs [see Ran, Section III, Subsection A, para. 1], and Webber discloses that their semantic folding is the Retina API product [see Webber, pg. 30, para. 1]. Thus, the first and second SDRs used in Ran are also representative of unordered collections of set bits according to the Retina API disclosed by Webber.
Claims 10 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over RAN, Y. et al. (Text classification algorithm based on sparse distributed representation, first cited in the IDS filed 02/27/2023), hereinafter Ran, in view of Boudreau, Luke G. (Contractive Autoencoding for Hierarchical Temporal Memory and Sparse Distributed Representation Binding), hereinafter Luke, as applied in claim 1 above, and further in view of F. D. S. Webber (Semantic folding theory and its application in semantic fingerprinting), hereinafter Webber.
Regarding claim 10, Ran teaches a method (Ran discloses a method for classification [see Ran, pg. 2, FIG. 1]). It would have been obvious to one of ordinary skill in the art before the effective filing date to perform the method of Ran on a general-purpose computer, thereby incorporating a computational system configured to:
receive data as input and produce, based on the data, a first series of Sparse Distributed Representations (SDRs) as output (Ran discloses receiving text documents as input and converting the documents to their SDRs [see Ran, pg. 2, FIG. 1; Section III, Subsection A, para. 1]);
receive the first series of SDRs as input and produce, based on the first series of SDRs, a second series of Sparse Distributed Representations (SDRs) as output (Ran discloses receiving the first series of SDRs, the doc-SDRs, as input and producing the second series of SDRs, the class-SDRs, as output [see Ran, pg. 2, FIG. 1; Section III, Subsection B, para. 2]);
receive the second series of SDRs as input and produce, based on the second series of SDRs, a series of signatures (Ran discloses receiving the second SDRs, the class-SDRs, as input, and producing a classification based on the similarity between the first SDRs, the doc-SDRs, and the second SDRs, the class-SDRs [see Ran, pg. 2, FIG. 1; Section III, Subsection C]);
wherein each signature in the series of signatures is associated with (i) a first SDR in the first series of SDRs, (ii) a second SDR in the second series of SDRs, and (iii) a portion of the data (Ran discloses receiving the second SDRs, the class-SDRs, as input, and producing a classification based on the similarity between the first SDRs, the doc-SDRs, and the second SDRs, the class-SDRs [see Ran, pg. 2, FIG. 1; Section III, Subsection C]. Thus, the class signature is associated with at least a first SDR, a doc-SDR; the second SDR, the class-SDR; and a portion of the data, namely the portion corresponding to that doc-SDR rather than all doc-SDRs);
wherein each signature conveys information regarding a corresponding object, represented by the portion of the data, based on the second SDR (Ran discloses receiving the first series of SDRs, the doc-SDRs, as input and producing the second series of SDRs, the class-SDRs, as output [see Ran, pg. 2, FIG. 1; Section III, Subsection B, para. 2], and receiving the second SDRs, the class-SDRs, as input and producing a classification based on the similarity between the first SDRs, the doc-SDRs, and the second SDRs, the class-SDRs [see Ran, pg. 2, FIG. 1; Section III, Subsection C]. Thus, the signatures for each class represent a portion of data based on the second SDR).
However, Ran fails to teach an encoder configured to receive; a neural processing unit configured to receive the first series of SDRs as input and produce, based on the first series of SDRs, a second series of Sparse Distributed Representations (SDRs) as output, wherein the neural processing unit comprises a plurality of artificial neurons having synaptic connections to offset locations in the first series of SDRs, and wherein the second series of SDRs is produced using the synaptic connections; a classifier configured to receive; and wherein each signature conveys information regarding a corresponding object, based on locations of nonzero bits in the SDR.
In the same field of endeavor, Luke teaches:
an encoder configured to receive data as input and produce, based on the data, a first Sparse Distributed Representation (SDR) as output (Luke discloses the encoder receiving data as input and outputting an SDR [see Luke, pg. 35, Figure 2.9]);
a neural processing unit configured to receive the first SDR as input and produce, based on the first SDR, a second Sparse Distributed Representation (SDR) as output, wherein the neural processing unit comprises a plurality of artificial neurons having synaptic connections to offset locations in the first SDR, and wherein the second SDR is produced using the synaptic connections (Luke discloses that the second SDR is produced using the HTM system with the first SDR as input [see Luke, Figure 2.9], that the HTM system uses the spatial pooler to learn the first SDR with synaptic connections between cell columns and offset locations of the first SDR [see Luke, Figure 2.1], and how the second SDR is formed from the active and inactive cell columns of the spatial pooler [see Luke, Figure 2.6]);
a classifier configured to receive the second SDR as input and produce, based on the second SDR, a class of which the data is a part (Luke discloses the classifier receiving the second SDR as input and producing a classification [see Luke, pg. 35, Figure 2.9]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate an encoder configured to receive; a neural processing unit configured to receive the first SDR as input and produce, based on the first SDR, a second Sparse Distributed Representation (SDR) as output, wherein the neural processing unit comprises a plurality of artificial neurons having synaptic connections to offset locations in the first SDR, and wherein the second SDR is produced using the synaptic connections, thereby teaching that the second series of SDRs is produced using the synaptic connections; and a classifier configured to receive, as suggested in Luke, into Ran because both methods produce SDRs of input data to perform classification (see Ran, pg. 2, FIG. 1; see Luke, pg. 35, Figure 2.9). Incorporating the teaching of Luke into Ran would achieve the predictable result of performing classification on input data that has been processed into an SDR.
However, the combination of Ran and Luke fails to teach wherein each signature conveys information regarding a corresponding object, based on locations of nonzero bits in the SDR.
In the same field of endeavor, Webber teaches:
wherein each signature conveys information regarding a corresponding object, based on locations of nonzero bits in the SDR (Webber illustrates their sparse representation [see Webber, pg. 14, Fig. 2] which conveys information based on the locations of the nonzero bits set [see Webber, pg. 14, para. 2]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate wherein each signature conveys information regarding a corresponding object, based on locations of nonzero bits in the SDR, as suggested in Webber into the combination of Ran and Luke because Ran references the semantic folding theory of Webber for encoding into SDRs [see Ran, Section I, para. 4], Ran further discloses using the Retina API for generating the SDRs [see Ran, Section III, Subsection A, para. 1], and Webber discloses that their semantic folding is the Retina API product [see Webber, pg. 30, para. 1].
Regarding claim 14, the combination of Ran and Luke as applied in claim 10 above teaches all the limitations of claim 10 and further teaches:
wherein the encoder represents each SDR (Luke discloses the encoder receiving data as input and outputting an SDR representing it [see Luke, pg. 35, Figure 2.9]) in the first series of SDRs as an ordered index, indicating set bits in that SDR (Ran discloses receiving text documents as input and converting the documents to their SDRs [see Ran, pg. 2, FIG. 1; Section III, Subsection A, para. 1]. Ran references the semantic folding theory of Webber for encoding into SDRs [see Ran, Section I, para. 4], and discloses using the Retina API for generating the SDRs [see Ran, Section III, Subsection A, para. 1]. Webber discloses that their semantic folding is the Retina API product [see Webber, pg. 30, para. 1]. Webber further illustrates an SDR for the word 'apple' that is represented as an ordered list of indexes indicating the set bits in the SDR; in other words, each value in "positions" represents the index of a set bit in the SDR, and they are ordered from least to greatest [see Webber, pg. 34, para. 1; pg. 34, Fig. 10]);
wherein the neural processing unit represents each SDR (Luke discloses the HTM system receiving the first SDR and outputting a second SDR representing it [see Luke, pg. 35, Figure 2.9]) in the second series of SDRs as an ordered index, indicating set bits in that SDR (Ran discloses receiving the first series of SDRs, the doc-SDRs, as input and producing the second series of SDRs, the class-SDRs, as output [see Ran, pg. 2, FIG. 1; Section III, Subsection B, para. 2]. Ran references the semantic folding theory of Webber for encoding into SDRs [see Ran, Section I, para. 4], and discloses using the Retina API for generating the SDRs [see Ran, Section III, Subsection A, para. 1]. Webber discloses that their semantic folding is the Retina API product [see Webber, pg. 30, para. 1]. Webber further illustrates an SDR for the word 'apple' that is represented as an ordered list of indexes indicating the set bits in the SDR; in other words, each value in "positions" represents the index of a set bit in the SDR, and they are ordered from least to greatest [see Webber, pg. 34, para. 1; pg. 34, Fig. 10]).
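The ordered-index representation attributed to Webber can be illustrated with a minimal sketch; the function names are illustrative assumptions:

    import numpy as np

    def to_positions(sdr: np.ndarray) -> list[int]:
        # Represent an SDR as an ascending list of set-bit indexes, analogous
        # to the "positions" list in Webber's 'apple' example.
        return sorted(int(i) for i in np.flatnonzero(sdr))

    def from_positions(positions: list[int], n_bits: int) -> np.ndarray:
        # Reconstruct the dense binary SDR from its ordered index list.
        sdr = np.zeros(n_bits, dtype=np.uint8)
        sdr[positions] = 1
        return sdr

    dense = np.array([0, 1, 0, 0, 1, 1, 0, 0], dtype=np.uint8)
    print(to_positions(dense))                     # [1, 4, 5]
    assert np.array_equal(from_positions([1, 4, 5], 8), dense)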
Regarding claim 15, the combination of Ran and Luke as applied in claim 10 above teaches all the limitations of claim 10 and further teaches:
wherein each SDR in the second series of SDRs is representative of a data structure in which bits are set to independently convey semantic meaning (Ran discloses the second series of SDRs, the class-SDRs, are generated based on the first SDRs, the doc-SDRs [see Ran, pg. 2, Fig. 1]. Ran references the semantic folding theory of Webber for encoding into SDRs [see Ran, Section I, para. 4], and discloses using the Retina API for generating the SDRs [see Ran, Section III, Subsection A, para. 1]. Webber discloses that their semantic folding is the Retina API product [see Webber, pg. 30, para. 1]. Webber illustrates an example of their sparse representation data structure where each bit independently conveys a specific meaning [see Webber, pg. 14, Fig. 2]. Thus, the SDRs used in Ran are also representative of a data structure in which bits are set to independently convey semantic meaning).
Regarding claim 16, the combination of Ran and Luke as applied in claim 10 above teaches all the limitations of claim 10 and further teaches:
wherein overlap between a pair of SDRs in the second series of SDRs is indicative of similarity between the pair of SDRs (Ran discloses that the SDR overlap metric is used to decide the category of a classified document by calculating the similarity between the document and every class-SDR [see Ran, Section I, para. 5]).
Claims 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over RAN, Y. et al. (Text classification algorithm based on sparse distributed representation, first cited in the IDS filed 02/27/2023), hereinafter Ran, in view of Boudreau, Luke G. (Contractive Autoencoding for Hierarchical Temporal Memory and Sparse Distributed Representation Binding), hereinafter Luke, and further in view of F. D. S. Webber (Semantic folding theory and its application in semantic fingerprinting), hereinafter Webber, as applied in claim 10 above, and further in view of Scott Purdy (Encoding Data for HTM Systems, first cited in the IDS filed 02/27/2023), hereinafter Purdy.
Regarding claim 11, the combination of Ran, Luke, and Webber as applied in claim 10 above teaches all the limitations of claim 10.
However, the combination of Ran, Luke, and Webber fails to teach wherein the data received by the encoder is in the form of a vector with an ordered set of values for the features.
In the same field of endeavor, Purdy teaches:
wherein the data received by the encoder is in the form of a vector with an ordered set of values for the features (Purdy discloses a plurality of encoders for representing data in SDRs [see Purdy, Abstract], one such category being encoders for geospatial data, which would be in the form of a vector; Purdy discloses a plurality of encoders for handling this category of geospatial data [see Purdy, Section 5], including one encoder which gives a strict ordering [see Purdy, pg. 9, para. 2]. Purdy further discloses that encoders can be combined for data requiring multiple values; thus, a vector of data could be handled by encoding each value in the vector and then combining them [see Purdy, Section 8]. Thus, the data to the encoder can be represented as an ordered set of values, such as the geospatial data or any other ordered-value data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate wherein the data received by the encoder is in the form of a vector with an ordered set of values for the features as suggested in Purdy into the combination of Ran, Luke, and Webber because Luke references Purdy for a plurality of types of encoders which can be used for non-binary data to be converted into an SDR [see Luke, Section 2.1.3, para. 1] and Purdy discloses a plurality of encoders for representing data in SDRs [see Purdy, Abstract]. Incorporating the teaching of Purdy into the combination of Ran, Luke, and Webber would provide a plurality of encoders for converting a plurality of data types into an SDR [see Purdy, Sections 3, 4, 5, and 6].
Regarding claim 13, the combination of Ran, Luke, and Webber as applied in claim 10 above teaches all the limitations of claim 10.
However, the combination of Ran, Luke, and Webber fails to teach wherein the encoder is able to mimic a linear encoder or a random distributed encoder based on a setting programmed in the computational system.
In the same field of endeavor, Purdy teaches:
wherein the encoder is able to mimic a linear encoder based on a setting programmed in the computational system (Purdy discloses a plurality of encoders for representing data in SDRs [see Purdy, Abstract], one such category being simple encoders for numbers [see Purdy, Section 3.1], which place all of the bits contiguously [see Purdy, pg. 3, Figures 1A, 3B, 3C]. Purdy further discloses that encoders can be combined for data requiring multiple values [see Purdy, Section 8]. Thus, multiple simple encoders for numbers could be used, or the simple encoder for numbers could be used with other encoders, depending on how the system is programmed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate wherein the encoder is able to mimic a linear encoder based on a setting programmed in the computational system as suggested in Purdy into the combination of Ran, Luke, and Webber because Luke references Purdy for a plurality of types of encoders which can be used for non-binary data to be converted into an SDR [see Luke, Section 2.1.3, para. 1] and Purdy discloses a plurality of encoders for representing data in SDRs [see Purdy, Abstract]. Incorporating the teaching of Purdy into the combination of Ran, Luke, and Webber would provide a plurality of encoders for converting a plurality of data types into an SDR [see Purdy, Sections 3, 4, 5, and 6].
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over RAN, Y. et al. (Text classification algorithm based on sparse distributed representation, first cited in the IDS filed 02/27/2023), hereinafter Ran, in view of Boudreau, Luke G. (Contractive Autoencoding for Hierarchical Temporal Memory and Sparse Distributed Representation Binding), hereinafter Luke, and further in view of F. D. S. Webber (Semantic folding theory and its application in semantic fingerprinting), hereinafter Webber, as applied in claim 10 above, and further in view of Fishel et al. (US 2019/0340490 A1), hereinafter Fishel, and further in view of YOO (US 2022/0284273 A1), hereinafter Yoo.
Regarding claim 12, the combination of Ran, Luke, and Webber as applied in claim 10 above teaches all the limitations of claim 10 and further teaches:
wherein the neural processing unit is a subcomponent (Luke discloses the HTM system as a subcomponent of the learning system [see Luke, pg. 35, Figure 2.9]).
However, the combination of Ran, Luke, and Webber fails to teach wherein the neural processing unit is a subcomponent of a natural neural processor that has a reconfigurable Multiple Instruction Single Data (MISD) architecture.
In the same field of endeavor, Fishel teaches:
wherein the neural network is a subcomponent of a natural neural processor (Fishel discloses a neural processor which has a neural network as a subcomponent, and that other types of neural network architectures with different types of layers or order of layers can be used [see Fishel, para. 83]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate wherein the neural network is a subcomponent of a natural neural processor as suggested in Fishel into the combination of Ran, Luke, and Webber, thereby teaching wherein the neural processing unit is a subcomponent of a natural neural processor, because Luke discloses the HTM learning system [see Luke, pg. 35, Figure 2.9], and Fishel discloses a neural processor which can incorporate any neural network architecture, including different types of layers or orders of layers, as a subcomponent [see Fishel, para. 83]. Thus, it would be possible to incorporate the HTM network of Luke as the neural network subcomponent of the neural processor disclosed by Fishel. It would have been further obvious to incorporate the teaching of Fishel into the combination of Ran, Luke, and Webber because both methods are directed to artificial intelligence applications (see Ran, Abstract; see Fishel, Abstract). Incorporating the teaching of Fishel into the combination of Ran, Luke, and Webber would perform operations in a fast and power-efficient manner while relieving CPU 208 of resource-intensive operations associated with neural network operations (see Fishel, para. 38).
However, the combination of Ran, Luke, Webber, and Fishel fails to teach a natural neural processor that has a reconfigurable Multiple Instruction Single Data (MISD) architecture.
In the same field of endeavor, Yoo teaches:
a natural neural processor that has a reconfigurable Multiple Instruction Single Data (MISD) architecture (Yoo discloses a neural processor [see Yoo, Abstract] implementing one or more hardware components including MISD architecture capable of responding to and executing instructions [see Yoo, para. 83]. Thus the MISD architecture is reconfigurable when the instructions executed are reconfigured).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate a natural neural processor that has a reconfigurable Multiple Instruction Single Data (MISD) architecture as suggested in Yoo into the combination of Ran, Luke, Webber, and Fishel because both methods are directed to artificial intelligence applications (see Ran, Abstract; see Yoo, para. 5). Incorporating the teaching of Yoo into the combination of Ran, Luke, Webber, and Fishel would enhance efficiency while reducing the costs of the accumulators and the adders (see Yoo, para. 59).
Response to Amendment
Applicant’s amendment to the specification is accepted and the objection is respectfully withdrawn.
Applicant’s amendment to claim 1 with regards to the priorly identified minor informality is accepted and the objection is respectfully withdrawn.
Applicant remarks on pg. 8 that claim 2 has been cancelled; however, the amendment to the claims fails to cancel claim 2 and fails to amend claims 3-4 to either be cancelled or no longer depend on claim 2. Thus, claim 2 has been considered pending for the purpose of examination.
Response to Arguments
Applicant’s arguments, filed 11/24/2025, traversing the rejection of claims 1-20 under 35 U.S.C. 103 have been fully considered and are not persuasive. With respect to claim 1, Applicant argues that none of Ran, Luke, or Fishel suggests a streaming-based reconfigurable instruction architecture wherein the second SDR is produced using the streaming-based reconfigurable instruction architecture, that Luke fails to suggest a plurality of artificial neurons having synaptic connections to offset locations in the first SDR wherein the second SDR is produced using the synaptic connections, and that Ran fails to disclose the signature recited by claim 1 and instead shows only a class label. Examiner respectfully disagrees.
With respect to the architecture, the combination of Ran and Luke would suggest using an HTM system to produce the second SDR; further combined with Fishel, the combination would suggest streaming tasks from the CPU to the NPU over the bus, with the NPU supporting any neural network architecture, such as the HTM network disclosed in Luke; and further combined with Yoo, the combination would suggest the NPU using a reconfigurable instruction architecture such as MISD. Thus, claim 1’s recitation of the neural processing unit using a streaming-based reconfigurable instruction architecture would be obvious in view of Ran, Luke, Fishel, and Yoo, and using said architecture to produce the second SDR would be further obvious in view of the aforementioned prior art.
With respect to the synapses, Applicant argues that the synapses disclosed by Luke connect to a receptive-field region instead of exact bit-offset locations of an SDR vector, wherein the subregion of input space of Luke is a conceptual region, not the offset locations in the first SDR recited by claim 1. Firstly, there is no reading of Luke that would suggest the proximal segments (i.e., the synapses) are conceptual regions. Instead, Luke discloses that the proximal segments have connections to only a fraction of the input space and that the organization of each column’s receptive field relative to the input space is known as topology [see Luke, pg. 9, para. 2]. Luke further provides example receptive fields in Figure 2.2, where two cell columns of the spatial pooler are shown with distinct receptive fields and corresponding proximal synapses, and explains on pg. vii that each proximal segment contains a set of six potential synapses. Further, there is no limitation recited that the offset locations are exact bit-offset locations, nor is there a definition in the specification that would require this. Thus, the receptive-field regions connecting the HTM cells (neurons) to offset locations of the input space read upon the claim language. Applicant further argues that the input space is a sensory input field and not offset bit locations of an SDR. However, as evident from Figure 2.9, the HTM system (including the spatial pooler) takes an SDR as input from the encoder. Thus, there is no suggestion in Luke that the input space is anything but an SDR, and the corresponding offset locations in the input space are offset locations in the input SDR. Further, there is no limitation recited in the claims, nor definition in the specification, that would require the SDRs to be a vector; para. 42 of the specification suggests that a bit vector can be leveraged by the system, but this is not a strict definition that would require an input SDR to be a bit vector. At best, the specification uses the terms iSDR and oSDR in para. 93 to refer to input and output SDRs that are sparse bit vectors; however, this language is not reflected in the claims and thus is not applicable. Applicant also argues that Luke fails to suggest the second SDR being produced using synaptic connections to offset locations of a first SDR; Examiner disagrees. Figure 2.2, as previously explained, shows proximal segments, which are synapses connecting HTM cells to offset locations of the input SDR (as evident from Figure 2.9), and Figure 2.6 further shows how the spatial pooler takes the input from the first SDR and converts it to a second SDR, which is a binary vector representation of the active and inactive cell columns given the input SDR. Thus, Luke reads upon the claim language of using synaptic connections to form the second SDR.
With respect to the signature, Applicant points to para. 98 of the specification and claims 2-4 as clarification to differentiate the recited signature from class labels. However, para. 98 does not define the signature to be different from a class label and instead explains how a signature can be created. Further, claims 2-4 do not require a signature to be different from a label, with claim 3 further reciting that the signature and the label are associated and are both indicative of the class. Thus, there is no recitation in the claims nor definition in the specification that would require the class signature to be different from a class label, and the class-SDR disclosed by Ran is applicable.
For at least the aforementioned reasons, claim 1 is obvious in view of Ran, Luke, Fishel, and Yoo, and the rejection under 35 U.S.C. 103 is respectfully maintained.
Applicant argues that claim 17 incorporates features similar to those of claim 1; thus, for the reasons discussed above with respect to claim 1, claim 17 is obvious in view of the prior art and the rejection under 35 U.S.C. 103 is respectfully maintained.
The rejections of claims 2-4, 7-9, and 18-20 under 35 U.S.C. 103 are also respectfully maintained.
Applicant argues that the rejection of claim 5 should be withdrawn because Purdy was not cited; however, Purdy was cited as pertinent prior art by Applicant in the IDS filed 02/27/2023. As Applicant has not otherwise argued against the rejection of claim 5 under 35 U.S.C. 103, and independent claim 1 remains rejected under 35 U.S.C. 103, the rejection of claim 5 under 35 U.S.C. 103 is respectfully maintained.
Applicant argues that the rejection of claims 6, 10, and 14-16 should be withdrawn because Webber was not cited; however, Webber was cited in the PTO-892 form mailed with the office action mailed 09/24/2025. As Applicant has not otherwise argued against the rejection of claims 6, 10, and 14-16 under 35 U.S.C. 103 beyond their similarity to independent claim 1, and independent claim 1 remains rejected under 35 U.S.C. 103, the rejections of claims 6, 10, and 14-16 under 35 U.S.C. 103 are respectfully maintained.
Applicant argues that the rejections of claims 11 and 13 should be withdrawn because Webber and Purdy were not cited; as previously discussed, both references were cited. Thus, the rejections under 35 U.S.C. 103 are respectfully maintained.
Applicant argues that the rejection of claim 12 should be withdrawn because Webber, Fishel, and Yoo were not cited; as discussed above, Webber was cited. Applicant is correct that Fishel and Yoo were omitted from the PTO-892 in error; however, the full document citation, including the Country Code, Number Code, and Kind Code, was included in the office action mailed 09/24/2025, and the references were discussed in the interview held on 11/10/2025. The PTO-892 form filed with this office action includes the citations for Fishel and Yoo to correct the previous error, and the rejection of claim 12 under 35 U.S.C. 103 is respectfully maintained.
For at least the aforementioned reasons, the rejections of claims 1-20 under 35 U.S.C. 103 are respectfully maintained.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hawkins et al. (US 11,087,227 B2) teaches an encoder and spatial pooler for inputting a first SDR and outputting a second SDR.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAKE BREEN whose telephone number is (571)272-0456. The examiner can normally be reached Monday - Friday, 7:00 AM - 3:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.T.B./Examiner, Art Unit 2143
/JENNIFER N WELCH/Supervisory Patent Examiner, Art Unit 2143