Prosecution Insights
Last updated: April 19, 2026
Application No. 17/613,979

COMPUTER-IMPLEMENTED METHOD FOR CREATING ENCODED DATA

Status: Non-Final OA (§103), Round 3
Filed: Nov 24, 2021
Examiner: STANLEY, JEREMY L
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: UNIVERSITY OF SOUTHAMPTON

Grant Probability: 48% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 2m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 48% of resolved cases (131 granted / 276 resolved; -7.5% vs TC avg)
Interview Lift: +44.7% for resolved cases with interview (strong lift)
Typical Timeline: 3y 2m avg prosecution; 28 applications currently pending
Career History: 304 total applications across all art units

Statute-Specific Performance

Allowance rate by statute (vs Tech Center average estimate):

§101: 10.2% (-29.8% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)

Tech Center average is an estimate; based on career data from 276 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Amendment filed on January 20, 2026. Claims 1, 15, and 25 are amended. Claim 24 is cancelled. Claims 1-23 and 25 are pending in the case. Claims 1, 15, and 25 are the independent claims. This action is non-final.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 20, 2026 has been entered.

Applicant's Response

In the Amendment filed on January 20, 2026, Applicant amended the claims and provided arguments in response to the rejections of the claims under 35 USC 103 in the previous Office action.

Response to Argument/Amendment

Applicant's amendments to the claims in response to the rejection of the claims under 35 USC 103 in the previous Office action are acknowledged, and Applicant's associated arguments have been fully considered.
Applicant argues that "Cherubini does not describe the feature of element-wise modular addition or 'each integer element of each of the plurality of hypervectors is representable as a binary number using a fixed number of bits greater than 1….The hypervector of claim 1 corresponds to a vector in which each element of the vector is an integer element that may be written in a binary format….Cherubini does not describe the feature of element-wise modular addition in the context of elements where the element are integer elements representable using a fixed number of bits greater than 1….binary XOR operation is specific to a binary vector….Applicant's claim 1 is amended to clarify that the vectors are not binary vectors. Therefore, the binary XOR operation of Cherubini does not correspond to the element wise modular addition of Cherubini….Cherubini does not describe or suggest any other binding operations….Cherubini does not describe concatenation but instead describes bundling….the cited prior art, including Fritz, fails to remedy the deficiencies of Cherubini….There is no suggestion in the prior art of using hypervector formed of integer elements…."

Examiner notes that the present amendment to the claim merely recites "wherein each integer element of each of the plurality of hypervectors is representable as a binary number using a fixed number of bits greater than 1." First, while this limitation refers to "each integer element of each of the plurality of hypervectors," which appears to imply that there may be some integer element in the hypervectors, Examiner notes that this does not appear to be recited in language that explicitly requires (i.e. positively recites) each of the hypervectors to be "a vector in which each element of the vector is an integer element…" as argued by Applicant.
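The dispute above turns on whether a binary XOR is merely the 1-bit special case of element-wise modular addition. A minimal sketch (illustrative only, not part of the prosecution record; the function name `bind` is a label chosen here) shows that with 1-bit elements modular addition coincides with XOR, while with elements representable in more than 1 bit the two operations diverge:

```python
import numpy as np

# Illustrative sketch, not part of the record: for binary vectors (1 bit per
# element), element-wise XOR coincides with element-wise addition modulo 2;
# elements using k > 1 bits generalize this to addition modulo 2**k, where
# XOR and modular addition no longer agree in general.

def bind(a: np.ndarray, b: np.ndarray, bits: int) -> np.ndarray:
    """Element-wise modular addition over integers of a fixed bit width."""
    return (a + b) % (2 ** bits)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=8)   # binary hypervector, 1 bit per element
y = rng.integers(0, 2, size=8)
assert np.array_equal(bind(x, y, bits=1), np.bitwise_xor(x, y))  # XOR = mod-2 sum

u = np.array([5, 6, 7])          # 3-bit elements, range 0..7
v = np.array([7, 3, 1])
print(bind(u, v, bits=3))        # [4 1 0]  (mod-8 sums)
print(np.bitwise_xor(u, v))      # [2 5 6]  (differs for multi-bit elements)
```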
Examiner respectfully recommends amending the claims to positively recite that each element of each hypervector is an integer element, if this interpretation of the claims is intended by Applicant. Second, the limitation recites that each integer element "is representable as a binary number using a fixed number of bits greater than 1." Examiner notes that this limitation appears to indicate the possibility that the integer elements are representable (i.e. can be, but are not necessarily required to be) as a binary number using a fixed number of bits greater than 1, but the limitation does not appear to explicitly require (i.e. positively recite) that each integer element is actually represented as a binary number using a fixed number of bits greater than 1. Examiner respectfully recommends amending the claims to positively recite that each integer element is represented as a binary number using a fixed number of bits greater than 1, if this interpretation of the claims is intended by Applicant.

To the extent that the amended independent claims may at least imply that the hypervectors include at least some elements which are integer elements, and that these integer elements are represented as a binary number using a fixed number of bits greater than 1, Applicant's argument appears to be persuasive, and the rejection is withdrawn. However, Applicant's argument is moot in view of the new grounds of rejection provided below.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "input configured to receive a plurality of hypervectors," "superposition module configured to concatenate…," "binding module configured to perform element-wise modular addition…" in claim 15.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1-4, 10, 15, 16, 19-21, 23, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Cherubini et al. (US 20200272895 A1) in view of Fritz et al. (US 20200394017 A1), further in view of Cedric Seger, An investigation of categorical variable encoding techniques in machine learning: binary versus one-hot and feature hashing, KTH, School of Electrical Engineering and Computer Science, Independent Thesis Basic Level, 26 October 2018, URN: urn:nbn:se:kth:diva-237426, DiVA id: diva2:1259073 [accessed on the Internet: https://www.diva-portal.org/smash/get/diva2:1259073/FULLTEXT01.pdf] (hereinafter "Seger"), further in view of Javier Snaider and Stan Franklin, Modular Composite Representation, Cognitive Computation, Volume 6, pages 510-527, 23 January 2014 [accessed on the Internet: https://link.springer.com/article/10.1007/s12559-013-9243-y] (hereinafter "Snaider").

With respect to claim 1, Cherubini teaches a computer-implemented method for creating encoded data for use in a cognitive computing system (e.g. paragraph 0008, method for answering cognitive query; paragraph 0013, addressing problem of encoding HD binary vectors), the method comprising: receiving a plurality of hypervectors, each representing a respective semantic object (e.g.
paragraph 0014, deriving/generating HD vectors, representing ground state of neural network; paragraph 0015, continuously generating filler HD vectors as long as sensor signals fed to the system; different hyper-vectors for deriving cognitive query and candidate answers, obtained from sensor input signals); performing element-wise modular addition of two or more of the plurality of hypervectors, thereby binding the corresponding semantic objects (e.g. paragraph 0019, combining hyper-vectors using binding operator; paragraph 0027, combining may comprise binding different hyper-vectors by a vector-element-wise binary XOR operation; compare with specification of the instant application at page 2 line 3, using element-wise XOR as binding, and at page 2 lines 28-29, the element-wise modular addition step comprises binding); and performing vector concatenation of two or more of the plurality of hypervectors, thereby superposing the corresponding semantic objects (e.g. paragraph 0019, combining hyper-vectors using bundling operator; paragraph 0028, combining may comprise bundling of different hyper-vectors by an element-wise binary average of the HD vectors).

Assuming arguendo that Cherubini does not explicitly disclose performing vector concatenation (i.e. to the extent that the "bundling" of Cherubini cannot be considered to be analogous to "concatenation," under the broadest reasonable interpretation), Fritz teaches performing vector concatenation (e.g. paragraph 0054, arbitrarily large input vectors; paragraph 0060, stacking J vector and K vector, simply concatenating these two proper bit stacks).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Cherubini and Fritz before them, to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling) to incorporate the teachings of Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors) to include the capability to perform vector concatenation of two arbitrarily large vectors (e.g. hypervectors). One of ordinary skill would have been motivated to perform such a modification in order to achieve higher speed while saving power, providing high speed, efficient addition of multiple operands, as described in Fritz (paragraphs 0003, 0008).

Cherubini and Fritz do not explicitly disclose wherein each integer element of each of the plurality of hypervectors is representable as a binary number using a fixed number of bits greater than 1. However, Seger teaches wherein each integer element of each of the plurality of hypervectors is representable as a binary number using a fixed number of bits greater than 1 (e.g. abstract, binary scheme allowing for representing features with log2(d)-dimensional vectors, where d is the dimension associated with one-hot encoding; page 3, second full paragraph, feature with eight unique values represented as a vector with three dimensions (log2(8)), in which categorical values are mapped to integers, but a binary representation of the integer is used; categorical value mapped to integer value of five represented as [1,1,0] in binary vector format (i.e. using at least 3 bits in binary); page 22, first paragraph, compressed representation uses log2 number of bits to represent each feature; i.e. where the number of bits of the vector/binary number are fixed based upon number of values represented by the corresponding feature).
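The kind of fixed-width encoding attributed to Seger can be sketched as follows. This is an illustration only: the function name `binary_encode` and the most-significant-bit-first ordering are assumptions of this sketch, not Seger's exact layout (the record quotes [1,1,0] for the value five, suggesting a different bit order).

```python
import math

# Illustrative sketch: a categorical feature with num_values unique values is
# mapped to integers, each written as a fixed-width binary vector of
# ceil(log2(num_values)) bits. Bit ordering (MSB first) is an assumption here.

def binary_encode(value: int, num_values: int) -> list[int]:
    width = max(1, math.ceil(math.log2(num_values)))
    return [(value >> i) & 1 for i in range(width - 1, -1, -1)]

# A feature with eight unique values needs log2(8) = 3 bits per value.
print(binary_encode(5, 8))  # [1, 0, 1]
print(binary_encode(0, 8))  # [0, 0, 0]
```

The "fixed number of bits" follows from the width being determined once by the number of distinct values, not per element.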
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Cherubini, Fritz, and Seger before them, to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling) and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors) to incorporate the teachings of Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format) to include the capability to represent each integer element of the hypervectors as a binary number using a number of bits greater than 1 (as taught by Seger). One of ordinary skill would have been motivated to perform such a modification in order to achieve a compressed representation without explicit loss of information, as described in Seger (page 3, second full paragraph).

Assuming arguendo that Cherubini, Fritz, and Seger do not explicitly disclose that each integer element of the plurality of hypervectors is representable using a fixed number of bits, and do not explicitly disclose element-wise modular addition of hypervectors or vector concatenation of hypervectors which have integer elements representable using a fixed number of bits greater than 1, Snaider teaches that each integer element of the plurality of hypervectors is representable using a fixed number of bits, and further teaches element-wise modular addition of hypervectors and vector concatenation of hypervectors which have integer elements representable using a fixed number of bits greater than 1 (e.g.
page 510, modular composite representation (MCR) employs long integer vectors for encoding complex structures as high-dimensional vectors; page 515, right column, first two paragraphs, MCR uses large modular integer vectors which have a defined integer range (i.e. and therefore representable using a fixed number of bits) of possible values for each dimension; values of vectors in each dimension employ modular arithmetic; page 520, second full paragraph, describing equivalent vectors grouping operation; page 525, right column, first full paragraph, MCR uses modular integer vectors instead of binary vectors; page 518, second full paragraph, the binding of the modular integer vectors is defined as the modular sum in each dimension, which resembles bitwise XOR used in Spatter Code (i.e. a binary implementation); Examiner notes that one of ordinary skill in the art would understand Snaider as teaching element-wise modular addition and vector concatenation operations for hypervectors of integer elements, respectively corresponding to binding and superposition operations, as is evidenced by Denis Kleyko, Dmitri A. Rachkovskij, Evgeny Osipov, and Abbas Rahimi, A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations, ACM Computing Surveys, Volume 55, Issue 6, Article No. 130, pages 1-40, 7 December 2022, https://doi.org/10.1145/3538531; see e.g. Table 2 on page 13, row starting with "MCR", indicating that the MCR model uses "dense integer HVs," and uses "component-wise modular addition" for binding and "component-wise discretized vector sum" for superposition; Examiner notes that this reference is not relied upon as a prior art reference, but as evidence regarding how one of ordinary skill in the art would understand the teachings of Snaider with respect to the claims of the instant application).
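The MCR-style binding attributed to Snaider above can be sketched as element-wise addition modulo a per-dimension range. The names `mcr_bind`/`mcr_unbind` and the choice of r = 16 (a 4-bit range) are illustrative assumptions of this sketch, not Snaider's exact parameters:

```python
import numpy as np

# Illustrative sketch of MCR-style operations: elements are integers modulo a
# range R per dimension (R = 16 here, i.e. a fixed 4-bit representation),
# binding is the element-wise modular sum, and unbinding is the element-wise
# modular difference. R and the function names are assumptions of this sketch.
R = 16  # modular range per dimension

def mcr_bind(a, b):
    return (np.asarray(a) + np.asarray(b)) % R

def mcr_unbind(c, b):
    return (np.asarray(c) - np.asarray(b)) % R

rng = np.random.default_rng(1)
a = rng.integers(0, R, size=10)
b = rng.integers(0, R, size=10)
c = mcr_bind(a, b)
assert np.array_equal(mcr_unbind(c, b), a)  # modular binding is invertible
```

With R = 2 this reduces to the XOR binding of binary spatter codes, which is the sense in which the modular sum "resembles bitwise XOR" in the citation above.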
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Cherubini, Fritz, Seger, and Snaider before them, to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), and Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), to incorporate the teachings of Snaider (directed to modular composite representation for high-dimensional vector spaces) to include the capability to represent each integer element of the hypervectors using a fixed number of bits greater than 1, and to further utilize corresponding element-wise modular addition/binding and grouping/superposition operations for the integer hypervectors (as taught by Snaider). One of ordinary skill would have been motivated to perform such a modification in order to balance representational expressiveness with implementational simplicity, as described in Snaider (page 526, left column, first full paragraph of the section "Conclusions and Further Work").

With respect to claim 25, Cherubini teaches a non-transitory computer-readable medium storing a set of instructions that, when executed by a processor, cause the processor (e.g. Cherubini paragraph 0030, computer program product, computer-usable/readable medium providing program code for use in instruction execution system) to: receive a plurality of hypervectors, each representing a respective semantic object (e.g.
paragraph 0014, deriving/generating HD vectors, representing ground state of neural network; paragraph 0015, continuously generating filler HD vectors as long as sensor signals fed to the system; different hyper-vectors for deriving cognitive query and candidate answers, obtained from sensor input signals); perform element-wise modular addition of two or more of the plurality of hypervectors, thereby binding the corresponding semantic objects (e.g. paragraph 0019, combining hyper-vectors using binding operator; paragraph 0027, combining may comprise binding different hyper-vectors by a vector-element-wise binary XOR operation; compare with specification of the instant application at page 2 line 3, using element-wise XOR as binding, and at page 2 lines 28-29, the element-wise modular addition step comprises binding); and perform vector concatenation of two or more of the plurality of hypervectors, thereby superposing the corresponding semantic objects (e.g. paragraph 0019, combining hyper-vectors using bundling operator; paragraph 0028, combining may comprise bundling of different hyper-vectors by an element-wise binary average of the HD vectors).

Assuming arguendo that Cherubini does not explicitly disclose performing vector concatenation (i.e. to the extent that the "bundling" of Cherubini cannot be considered to be analogous to "concatenation," under the broadest reasonable interpretation), Fritz teaches performing vector concatenation (e.g. paragraph 0054, arbitrarily large input vectors; paragraph 0060, stacking J vector and K vector, simply concatenating these two proper bit stacks).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Cherubini and Fritz before them, to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling) to incorporate the teachings of Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors) to include the capability to perform vector concatenation of two arbitrarily large vectors (e.g. hypervectors). One of ordinary skill would have been motivated to perform such a modification in order to achieve higher speed while saving power, providing high speed, efficient addition of multiple operands, as described in Fritz (paragraphs 0003, 0008).

Cherubini and Fritz do not explicitly disclose wherein each integer element of each of the plurality of hypervectors is representable as a binary number using a fixed number of bits greater than 1. However, Seger teaches wherein each integer element of each of the plurality of hypervectors is representable as a binary number using a fixed number of bits greater than 1 (e.g. abstract, binary scheme allowing for representing features with log2(d)-dimensional vectors, where d is the dimension associated with one-hot encoding; page 3, second full paragraph, feature with eight unique values represented as a vector with three dimensions (log2(8)), in which categorical values are mapped to integers, but a binary representation of the integer is used; categorical value mapped to integer value of five represented as [1,1,0] in binary vector format (i.e. using at least 3 bits in binary); page 22, first paragraph, compressed representation uses log2 number of bits to represent each feature; i.e. where the number of bits of the vector/binary number are fixed based upon number of values represented by the corresponding feature).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Cherubini, Fritz, and Seger before them, to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling) and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors) to incorporate the teachings of Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format) to include the capability to represent each integer element of the hypervectors as a binary number using a number of bits greater than 1 (as taught by Seger). One of ordinary skill would have been motivated to perform such a modification in order to achieve a compressed representation without explicit loss of information, as described in Seger (page 3, second full paragraph).

Assuming arguendo that Cherubini, Fritz, and Seger do not explicitly disclose that each integer element of the plurality of hypervectors is representable using a fixed number of bits, and do not explicitly disclose element-wise modular addition of hypervectors or vector concatenation of hypervectors which have integer elements representable using a fixed number of bits greater than 1, Snaider teaches that each integer element of the plurality of hypervectors is representable using a fixed number of bits, and further teaches element-wise modular addition of hypervectors and vector concatenation of hypervectors which have integer elements representable using a fixed number of bits greater than 1 (e.g.
page 510, modular composite representation (MCR) employs long integer vectors for encoding complex structures as high-dimensional vectors; page 515, right column, first two paragraphs, MCR uses large modular integer vectors which have a defined integer range (i.e. and therefore representable using a fixed number of bits) of possible values for each dimension; values of vectors in each dimension employ modular arithmetic; page 520, second full paragraph, describing equivalent vectors grouping operation; page 525, right column, first full paragraph, MCR uses modular integer vectors instead of binary vectors; page 518, second full paragraph, the binding of the modular integer vectors is defined as the modular sum in each dimension, which resembles bitwise XOR used in Spatter Code (i.e. a binary implementation); Examiner notes that one of ordinary skill in the art would understand Snaider as teaching element-wise modular addition and vector concatenation operations for hypervectors of integer elements, respectively corresponding to binding and superposition operations, as is evidenced by Denis Kleyko, Dmitri A. Rachkovskij, Evgeny Osipov, and Abbas Rahimi, A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations, ACM Computing Surveys, Volume 55, Issue 6, Article No. 130, pages 1-40, 7 December 2022, https://doi.org/10.1145/3538531; see e.g. Table 2 on page 13, row starting with "MCR", indicating that the MCR model uses "dense integer HVs," and uses "component-wise modular addition" for binding and "component-wise discretized vector sum" for superposition; Examiner notes that this reference is not relied upon as a prior art reference, but as evidence regarding how one of ordinary skill in the art would understand the teachings of Snaider with respect to the claims of the instant application).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Cherubini, Fritz, Seger, and Snaider before them, to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), and Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), to incorporate the teachings of Snaider (directed to modular composite representation for high-dimensional vector spaces) to include the capability to represent each integer element of the hypervectors using a fixed number of bits greater than 1, and to further utilize corresponding element-wise modular addition/binding and grouping/superposition operations for the integer hypervectors (as taught by Snaider). One of ordinary skill would have been motivated to perform such a modification in order to balance representational expressiveness with implementational simplicity, as described in Snaider (page 526, left column, first full paragraph of the section "Conclusions and Further Work").

With respect to claim 15, Cherubini teaches a cognitive processing unit for use in a cognitive computing system, the cognitive processing unit (e.g. paragraphs 0111-0116, invention implemented with virtually any type of computer/computing system having various components) comprising: an input configured to receive a plurality of hypervectors (e.g.
paragraph 0014, deriving/generating HD vectors, representing ground state of neural network; paragraph 0015, continuously generating filler HD vectors as long as sensor signals fed to the system; different hyper-vectors for deriving cognitive query and candidate answers, obtained from sensor input signals); a superposition module configured to concatenate two or more of the plurality of hypervectors (e.g. paragraph 0019, combining hyper-vectors using bundling operator; paragraph 0028, combining may comprise bundling of different hyper-vectors by an element-wise binary average of the HD vectors); and a binding module configured to perform element-wise modular addition of two or more of the plurality of hypervectors (e.g. paragraph 0019, combining hyper-vectors using binding operator; paragraph 0027, combining may comprise binding different hyper-vectors by a vector-element-wise binary XOR operation; compare with specification of the instant application at page 2 line 3, using element-wise XOR as binding, and at page 2 lines 28-29, the element-wise modular addition step comprises binding).

Assuming arguendo that Cherubini does not explicitly disclose performing vector concatenation (i.e. to the extent that the "bundling" of Cherubini cannot be considered to be analogous to "concatenation," under the broadest reasonable interpretation), Fritz teaches performing vector concatenation (e.g. paragraph 0054, arbitrarily large input vectors; paragraph 0060, stacking J vector and K vector, simply concatenating these two proper bit stacks).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini and Fritz in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), to incorporate the teachings of Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors) to include the capability to perform vector concatenation of two arbitrarily large vectors (e.g. hypervectors). One of ordinary skill would have been motivated to perform such a modification in order to achieve higher speed while saving power, providing high speed, efficient addition of multiple operands, as described in Fritz (paragraph 0003, 0008). Cherubini and Fritz do not explicitly disclose wherein each integer element of each of the plurality of hypervectors is representable as a binary number using a fixed number of bits greater than 1. However, Seger teaches wherein each integer element of each of the plurality of hypervectors is representable as a binary number using a fixed number of bits greater than 1 (e.g. abstract, binary scheme allowing for representing features with log2(d)-dimensional vectors, where d is the dimension associated with one-hot encoding; page 3, second full paragraph, feature with eight unique values represented as a vector with three dimensions (log2(8)), in which categorical values are mapped to integers, but a binary representation of the integer is used; categorical value mapped to integer value of five represented as [1,1,0] in binary vector format (i.e. using at least 3 bits in binary); page 22, first paragraph, compressed representation uses log2 number of bits to represent each feature; i.e. where the number of bits of the vector/binary number are fixed based upon number of values represented by the corresponding feature). 
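Seger's compressed encoding, as characterized in the citation above, amounts to writing each categorical value's integer index in fixed-width binary using ceil(log2(d)) bits. A minimal sketch follows; the function name and the MSB-first bit ordering are assumptions of the sketch, and Seger's own worked example may order bits differently:

```python
import math

def binary_encode(value_index: int, num_values: int) -> list:
    """Encode a categorical value, given as an integer index, as a
    fixed-width binary vector of ceil(log2(d)) bits, where d is the
    number of unique values the feature can take."""
    width = max(1, math.ceil(math.log2(num_values)))
    return [(value_index >> i) & 1 for i in range(width - 1, -1, -1)]

# A feature with eight unique values needs log2(8) = 3 bits.
assert binary_encode(5, 8) == [1, 0, 1]
assert all(len(binary_encode(v, 8)) == 3 for v in range(8))
```

One-hot encoding of the same feature would require d = 8 dimensions; the binary form compresses this to 3 bits without explicit loss of information, which is the motivation the rejection attributes to Seger.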
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, and Seger in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling) and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format) to include the capability to represent each integer element of the hypervectors as a binary number using a number of bits greater than 1 (as taught by Seger). One of ordinary skill would have been motivated to perform such a modification in order to achieve a compressed representation without explicit loss of information, as described in Seger (page 3, second full paragraph). Assuming arguendo that Cherubini, Fritz, and Seger do not explicitly disclose that each integer element of the plurality of hypervectors is representable using a fixed number of bits, and do not explicitly disclose element-wise modular addition of hypervectors or vector concatenation of hypervectors which have integer elements representable using a fixed number of bits greater than 1, Snaider teaches that each integer element of the plurality of hypervectors is representable using a fixed number of bits, and further teaches element-wise modular addition of hypervectors and vector concatenation of hypervectors which have integer elements representable using a fixed number of bits greater than 1 (e.g. 
page 510, modular composite representation (MCR) employs long integer vectors for encoding complex structures as high-dimensional vectors; page 515, right column, first two paragraphs, MCR uses large modular integer vectors which have a defined integer range (i.e. and therefore representable using a fixed number of bits) of possible values for each dimension; values of vectors in each dimension employ modular arithmetic; page 520, second full paragraph, describing equivalent vectors grouping operation; page 525, right column, first full paragraph, MCR uses modular integer vectors instead of binary vectors; page 518, second full paragraph, the binding of the modular integer vectors is defined as the modular sum in each dimension, which resembles bitwise XOR used in Spatter Code (i.e. a binary implementation); Examiner notes that one of ordinary skill in the art would understand Snaider as teaching element-wise modular addition and vector concatenation operations for hypervectors of integer elements, respectively corresponding to binding and superposition operations, as is evidenced by Denis Kleyko, Dmitri A. Rachkovskij, Evgeny Osipov, and Abbas Rahimi. A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations. ACM Computing Surveys, Volume 55, Issue 6. Article No.: 130, Pages 1 - 40. 07 December 2022. https://doi.org/10.1145/3538531; see e.g. Table 2 on page 13, row starting with “MCR”, indicating that the MCR model uses “dense integer HVs,” and uses “component-wise modular addition” for binding and “component-wise discretized vector sum” for superposition; Examiner notes that this reference is not relied upon as a prior art reference, but as evidence regarding how one of ordinary skill in the art would understand the teachings of Snaider with respect to the claims of the instant application). 
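For illustration, the MCR operations attributed to Snaider (integer hypervectors whose elements lie in a fixed modular range, bound by element-wise modular addition) can be sketched as follows; the modular range R = 16 and the function names are assumptions of the sketch:

```python
import numpy as np

R = 16  # modular range per dimension (an assumed value); each element
        # then fits in a fixed log2(16) = 4 bits, i.e. more than 1 bit

def mcr_bind(a: np.ndarray, b: np.ndarray, r: int = R) -> np.ndarray:
    """MCR-style binding: element-wise modular addition."""
    return (a + b) % r

def mcr_unbind(c: np.ndarray, b: np.ndarray, r: int = R) -> np.ndarray:
    """Invert a binding by element-wise modular subtraction."""
    return (c - b) % r

rng = np.random.default_rng(1)
a = rng.integers(0, R, size=10)
b = rng.integers(0, R, size=10)
assert np.array_equal(mcr_unbind(mcr_bind(a, b), b), a)
```

With r = 2 the binding reduces to bitwise XOR, the Spatter Code operation that Snaider's modular sum is said to resemble.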
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, and Snaider in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), and Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), to incorporate the teachings of Snaider (directed to modular composite representation for high-dimensional vector spaces) to include the capability to represent each integer element of the hypervectors using a fixed number of bits greater than 1, and to further utilize corresponding element-wise modular addition/binding and grouping/superposition operations for the integer hypervectors (as taught by Snaider). One of ordinary skill would have been motivated to perform such a modification in order to balance representational expressiveness with implementational simplicity, as described in Snaider (page 526, left column, first full paragraph of the section “Conclusions and Further Work”). With respect to claim 19, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches a cognitive computing system comprising the cognitive processing unit of claim 15 (e.g. Cherubini paragraphs 0111-0112, invention implemented with virtually any type of computer including computing system 900, which may be one of various microprocessor systems, etc.). 
With respect to claim 2, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 1 as previously discussed, and Cherubini further teaches wherein the plurality of hypervectors are generated by an artificial neural network (e.g. paragraph 0014, trained artificial neural network states used to derive HD vectors; generating HD vectors at end of neural network training period, representing ground state of neural network; paragraph 0015, after training HD vectors generated continuously as long as sensor input signals fed to the system; paragraph 0025, generating plurality of HD vectors at end of training of neural network; paragraph 0026, continuously generating plurality of hypervectors based on output data of plurality of neural networks; paragraphs 0058-0059, vectors determined at end of training of system’s neural network, generated using data from hidden layers and output layer; vectors continuously generated during feeding of neural network with sensor input signals). With respect to claim 20, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 19 as previously discussed, and Cherubini further teaches the system further comprising: an artificial neural network configured to generate the plurality of hypervectors received by the cognitive processing unit (e.g. 
paragraph 0014, trained artificial neural network states used to derive HD vectors; generating HD vectors at end of neural network training period, representing ground state of neural network; paragraph 0015, after training HD vectors generated continuously as long as sensor input signals fed to the system; paragraph 0025, generating plurality of HD vectors at end of training of neural network; paragraph 0026, continuously generating plurality of hypervectors based on output data of plurality of neural networks; paragraphs 0058-0059, vectors determined at end of training of system’s neural network, generated using data from hidden layers and output layer; vectors continuously generated during feeding of neural network with sensor input signals). With respect to claim 3, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 1 as previously discussed, and Cherubini further teaches the method further comprising storing each of the hypervectors created by at least one of: the element-wise modular addition and/or the vector concatenation (e.g. paragraph 0019, hypervectors stored in associative memory; paragraph 0025, vectors stored in memory; paragraph 0026, hypervectors stored in memory; paragraph 0055, storing HD vectors in memory; paragraph 0105, HD vector generated and stored in associative memory). With respect to claim 16, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 15 as previously discussed, and Cherubini further teaches the cognitive processing unit further comprising: one or more buffer arrays configured to temporarily hold at least one of: the received plurality of hypervectors, the hypervectors created by the superposition module, or the binding module (e.g. 
paragraph 0019, hypervectors stored in associative memory; paragraph 0025, vectors stored in memory; paragraph 0026, hypervectors stored in memory; paragraph 0055, storing HD vectors in memory; paragraph 0105, HD vector generated and stored in associative memory). With respect to claim 23, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 19 as previously discussed, and further teaches the cognitive computing system further comprising: a memory configured to store the hypervector created by the cognitive processing unit (e.g. paragraph 0019, hypervectors stored in associative memory; paragraph 0025, vectors stored in memory; paragraph 0026, hypervectors stored in memory; paragraph 0055, storing HD vectors in memory; paragraph 0105, HD vector generated and stored in associative memory). With respect to claim 4, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 1 as previously discussed, and Cherubini further teaches wherein the method is for creating encoded data for use by an artificial neural network for the purpose of input data classification, and wherein the method further comprises: using, by the artificial neural network, a hypervector created by the element-wise modular addition and/or the vector concatenation, for encoding input data received by the artificial neural network (e.g. paragraph 0003, models for recognizing/classifying unknown data; paragraphs 0013, 0016, 0074, solving/addressing problem of encoding HD vectors for compositional structures directly from sensor data). 
With respect to claim 21, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 20 as previously discussed, and Cherubini further teaches wherein the artificial neural network is further configured to: encode input data generated by a sensor using a hypervector created by the cognitive processing unit (e.g. paragraph 0003, models for recognizing/classifying unknown data; paragraphs 0013, 0016, 0074, solving/addressing problem of encoding HD vectors for compositional structures directly from sensor data). With respect to claim 10, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 1 as previously discussed, and Cherubini further teaches wherein each hypervector consists of one or more subvectors, wherein each subvector has a fixed length y (e.g. paragraph 0056, equal length HD vectors; paragraph 0076, high-dimensional vectors of fixed length; fixed length of vectors for representations). Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Burger (US 20190057303 A1). With respect to claim 16, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 16 as previously discussed. Assuming arguendo that Cherubini does not explicitly disclose the cognitive processing unit further comprising: one or more buffer arrays configured to temporarily hold at least one of: the received plurality of hypervectors, the hypervectors created by the superposition module, or the binding module, Burger teaches the cognitive processing unit further comprising: one or more buffer arrays configured to temporarily hold at least one of: the received plurality of hypervectors, the hypervectors created by the superposition module, or the binding module (e.g. paragraph 0060, receiving vector and weights data via FIFOs or other buffers). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Burger in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Burger (directed to a hardware node having a mixed-signal matrix vector unit) to include, in the cognitive processing unit, one or more buffer arrays to temporarily hold the received vectors/hypervectors (as taught by Burger). One of ordinary skill would have been motivated to perform such a modification in order to achieve reduced training times, enable new training scenarios, and train models of unprecedented scale, as described in Burger (paragraph 0019). Claims 5 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Eliasmith et al. (US 20180225570 A1). With respect to claim 5, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 1 as previously discussed. Cherubini and Fritz do not explicitly disclose decoding, by an artificial neural network, a hypervector created by the element-wise modular addition and/or the vector concatenation, to generate output data for use by an output device. 
However, Eliasmith teaches decoding, by an artificial neural network, a hypervector created by the element-wise modular addition and/or the vector concatenation, to generate output data for use by an output device (e.g. paragraph 0110, decoding semantic pointers; generating semantic pointers by binding two vectors together; paragraph 0118, receiving encoded semantic pointers as input and decoding encoded memory by unbinding; paragraphs 0128-0129, information decoding module extracting semantic pointers; any semantic pointer which can be encoded as described can also be decoded in corresponding operation; motor processing module receiving semantic pointers output from information decoding module and processing them for output to motor output hierarchy). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Eliasmith in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Eliasmith (directed to methods and systems for artificial cognition) to include the capability to decode the hypervectors created by addition/concatenation steps (i.e. such as the semantic pointers of Eliasmith, representing high-dimensional structures via binding of vectors, similar to the hypervectors/HD vectors of Cherubini, created via binding/bundling of vectors) to generate output data for use by an output device (as taught by Eliasmith). 
One of ordinary skill would have been motivated to perform such a modification in order to reduce the number of dimensions needed to represent high-dimensional structures, as described in Eliasmith (abstract). With respect to claim 22, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 20 as previously discussed. Cherubini and Fritz do not explicitly disclose wherein the artificial neural network is further configured to: generate an output signal for use by an output device by decoding a hypervector created by the cognitive processing unit. However, Eliasmith teaches wherein the artificial neural network is further configured to: generate an output signal for use by an output device by decoding a hypervector created by the cognitive processing unit (e.g. paragraph 0110, decoding semantic pointers; generating semantic pointers by binding two vectors together; paragraph 0118, receiving encoded semantic pointers as input and decoding encoded memory by unbinding; paragraphs 0128-0129, information decoding module extracting semantic pointers; any semantic pointer which can be encoded as described can also be decoded in corresponding operation; motor processing module receiving semantic pointers output from information decoding module and processing them for output to motor output hierarchy). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Eliasmith in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Eliasmith (directed to methods and systems for artificial cognition) to include the capability to decode the hypervectors created by addition/concatenation steps (i.e. such as the semantic pointers of Eliasmith, representing high-dimensional structures via binding of vectors, similar to the hypervectors/HD vectors of Cherubini, created via binding/bundling of vectors) to generate output data for use by an output device (as taught by Eliasmith). One of ordinary skill would have been motivated to perform such a modification in order to reduce the number of dimensions needed to represent high-dimensional structures, as described in Eliasmith (abstract). Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Shen et al. (US 20190370652 A1). With respect to claim 17, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 15 as previously discussed. Cherubini and Fritz do not explicitly disclose wherein the superposition module comprises a multiplexer-demultiplexer pair configured to concatenate the two or more of the hypervectors. 
However, Shen teaches wherein the superposition module comprises a multiplexer-demultiplexer pair configured to concatenate the two or more of the hypervectors (e.g. paragraphs 0017, 0074, optical multiplexer configured to combine plurality of optical input vectors having respective wavelengths into combined optical input vector; further demultiplexing to generate plurality of demultiplexed output voltages/electrical signals; paragraph 0397, improving throughput of artificial neural network (ANN) through parallel processing of input vectors using wavelength division multiplexing; multiplexing/demultiplexing optical signals in common signal using well known structures such as multiplexers and demultiplexers; i.e. a system containing a combination of multiplexing and demultiplexing functionality configured to at least combine/concatenate vectors). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Shen in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Shen (directed to optoelectronic computing systems implementing neural networks) to include, in the superposition module, a multiplexer-demultiplexer pair configured to concatenate/combine the HD vectors (as taught by Shen). 
One of ordinary skill would have been motivated to perform such a modification in order to improve the throughput of artificial neural network processing through parallel processing of input vectors, as described in Shen (paragraph 0397). Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Fowers et al. (US 20180341486 A1). With respect to claim 18, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 15 as previously discussed. Cherubini and Fritz do not explicitly disclose wherein the binding module comprises an add/subtract circuit configured to perform element-wise modular addition or subtraction of the two or more of the hypervectors. However, Fowers teaches wherein the binding module comprises an add/subtract circuit configured to perform element-wise modular addition or subtraction of the two or more of the hypervectors (e.g. paragraph 0059, vector processing operation circuits configured to perform vector-vector operations, such as elementwise vector-vector addition; paragraph 0077, vector processing pipelines, including circuit configured to perform addition/subtraction; paragraph 0081, vector processing operation circuit – add/subtract circuit). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Fowers in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Fowers (directed to multifunction vector processor circuits) to include, in the binding module, an add/subtract circuit configured for elementwise vector-vector addition (as taught by Fowers). One of ordinary skill would have been motivated to perform such a modification in order to enable highly desirable efficient processing of machine learning algorithms to improve energy efficiency and throughput, as described in Fowers (paragraph 0001). Claims 6, 7, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of A. Patyk-Łońska, M. Czachor and D. Aerts, "A comparison of geometric analogues of holographic reduced representations, original holographic reduced representations and binary spatter codes," 2011 Federated Conference on Computer Science and Information Systems (FedCSIS), Szczecin, Poland, 2011, pp. 221-228. [retrieved from the Internet: https://ieeexplore.ieee.org/abstract/document/6078300]. (hereinafter Patyk-Łońska). 
With respect to claim 6, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 1 as previously discussed, and Cherubini teaches wherein the plurality of hypervectors comprises a first set of one or more hypervectors, each representing a respective pointer semantic object, and a second set of hypervectors, each representing a respective filler semantic object (e.g. paragraph 0015, combining HD role and filler vectors; paragraph 0059, filler vectors and role vectors; paragraphs 0102-0103, Fig. 6, building role vectors and filler vectors). Cherubini and Fritz do not explicitly disclose wherein the first set of hypervectors comprises one or more invertible hypervectors and the second set of hypervectors comprises one or more invertible or non-invertible hypervectors. However, Patyk-Łońska teaches wherein the first set of hypervectors comprises one or more invertible hypervectors and the second set of hypervectors comprises one or more invertible or non-invertible hypervectors (e.g. page 221, first column, first paragraph of “Introduction” section, discussing representations of cognitive structures with binding of role-filler codevectors; page 222, first column, discussing inverses of vectors, i.e. a vector is invertible/possesses an inverse if its magnitude is nonzero; geometric product of an arbitrary number of invertible vectors is also invertible; page 224, first column, discussing a decoding vector which is an inverse of a role vector; i.e. the set of vectors includes a subset of role/pointer vectors and a subset of filler vectors, where at least the role/pointer vectors are invertible (i.e. have an inverse), and the filler vectors may or may not be invertible). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Patyk-Łońska in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Patyk-Łońska (directed to geometric analogues of holographic reduced representations, original holographic reduced representations, and binary spatter codes, used for representing cognitive structures) to include, as the subset of hypervectors representing role/pointer objects, vectors which are invertible, and, as the subset of hypervectors representing filler objects, vectors which may or may not be invertible (as taught by Patyk-Łońska). One of ordinary skill would have been motivated to perform such a modification in order to define a hierarchy of associative, non-commutative, and invertible operations in which the resulting superpositions are less noisy than ones based on convolutions, and in which nonzero vectors are invertible and therefore possess an important property for unbinding and recognition, as described in Patyk-Łońska (page 222, second column, final paragraph). 
With respect to claim 11, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 10 as previously discussed, and Cherubini teaches wherein the plurality of hypervectors comprises one or more of a first type of hypervectors, each representing a respective pointer semantic object, and one or more of a second type of hypervectors, each representing a respective filler semantic object; and each hypervector representing a pointer semantic object or a filler semantic object consists of one subvector (e.g. paragraph 0015, combining HD role and filler vectors; paragraph 0059, filler vectors and role vectors; paragraphs 0102-0103, Fig. 6, building role vectors and filler vectors). Cherubini and Fritz do not explicitly disclose that the first type of hypervectors are invertible, or that the second type of hypervectors are invertible or non-invertible. However, Patyk-Łońska teaches that the first type of hypervectors are invertible, or that the second type of hypervectors are invertible or non-invertible (e.g. page 221, first column, first paragraph of “Introduction” section, discussing representations of cognitive structures with binding of role-filler codevectors; page 222, first column, discussing inverses of vectors, i.e. a vector is invertible/possesses an inverse if its magnitude is nonzero; geometric product of an arbitrary number of invertible vectors is also invertible; page 224, first column, discussing a decoding vector which is an inverse of a role vector; i.e. the set of vectors includes a subset of role/pointer vectors and a subset of filler vectors, where at least the role/pointer vectors are invertible (i.e. have an inverse), and the filler vectors may or may not be invertible). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Patyk-Łońska in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Patyk-Łońska (directed to geometric analogues of holographic reduced representations, original holographic reduced representations, and binary spatter codes, used for representing cognitive structures) to include, as the subset of hypervectors representing role/pointer objects, vectors which are invertible, and, as the subset of hypervectors representing filler objects, vectors which may or may not be invertible (as taught by Patyk-Łońska). One of ordinary skill would have been motivated to perform such a modification in order to define a hierarchy of associative, non-commutative, and invertible operations in which the resulting superpositions are less noisy than ones based on convolutions, and in which nonzero vectors are invertible and therefore possess an important property for unbinding and recognition, as described in Patyk-Łońska (page 222, second column, final paragraph). 
With respect to claim 7, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Patyk-Łońska teaches all of the limitations of claim 6 as previously discussed, and Cherubini and Patyk-Łońska further teach wherein the method is for extracting information from a cognitive computing system, wherein performing element-wise modular addition comprises: binding a filler semantic object to a pointer base item, thereby creating a first hypervector, and extracting the hypervector representing the filler semantic object from the first hypervector by binding the first hypervector with the inverse of the hypervector representing the pointer base item (e.g. Cherubini paragraph 0015, using combined HD role and filler vectors representing candidate answers to derive response to a query; paragraph 0077, reduced representations and cognitive models manipulated with binding and bundling functions for binary vectors; Patyk-Łońska page 221, first column, “Introduction” section, representations of cognitive structures, binding of role-filler codevectors; bound n-tuples superposed by addition; unbinding performed by inverse; page 222, second column, final paragraph, vectors being invertible important for unbinding and recognition; page 224, first column, discussing use of a decoding vector which is an inverse of a role vector). 
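The bind/extract round trip recited in claim 7 can be sketched directly, taking binding to be the claimed element-wise modular addition and the inverse to be the element-wise additive inverse mod p. The vector values and the modulus p = 7 are illustrative only, not taken from Cherubini or Patyk-Łońska.

```python
p = 7  # hypothetical modulus

def bind(a, b, p):
    """Element-wise modular addition of two equal-length hypervectors."""
    return [(x + y) % p for x, y in zip(a, b)]

def inverse(v, p):
    """Additive inverse mod p; binding with it undoes a prior bind."""
    return [(-x) % p for x in v]

pointer_base = [3, 1, 4, 1, 5]
filler       = [2, 6, 5, 3, 5]

# bind the filler to the pointer base item, creating the first hypervector
bound = bind(filler, pointer_base, p)
# extract the filler by binding with the inverse of the pointer base item
recovered = bind(bound, inverse(pointer_base, p), p)
assert recovered == filler
```

Because every vector has an additive inverse mod p, this binding is always invertible, which is the property the rejection attributes to Patyk-Łońska's unbinding discussion.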
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Patyk-Łońska in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Patyk-Łońska (directed to geometric analogues of holographic reduced representations, original holographic reduced representations, and binary spatter codes, used for representing cognitive structures) to include the capability to bind the role and filler vectors to create a hypervector (as taught by both Cherubini and Patyk-Łońska) and to further extract the filler by performing the inverse operation/unbinding (as taught by Patyk-Łońska). One of ordinary skill would have been motivated to perform such a modification in order to define a hierarchy of associative, non-commutative, and invertible operations in which the resulting superpositions are less noisy than ones based on convolutions, and in which nonzero vectors are invertible and therefore possess an important property for unbinding and recognition, as described in Patyk-Łońska (page 222, second column, final paragraph). Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Stephens (US 20180203699 A1). 
With respect to claim 8, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 1 as previously discussed, and Cherubini teaches wherein each hypervector has a maximum allowable length n (e.g. paragraph 0056, equal length HD vectors; paragraph 0076, high-dimensional vectors of fixed length; fixed length of vectors for representations; i.e. where a fixed length for hypervectors is analogous to those hypervectors having a maximum allowable length (i.e. the length is fixed such that it may not exceed the fixed length) of some arbitrary amount/number such as “n”). Cherubini and Fritz do not explicitly disclose wherein the performing vector concatenation comprises raising an exception or a flag if the length of the hypervector created by the performing vector concatenation exceeds n. However, Stephens teaches wherein the performing vector concatenation comprises raising an exception or a flag if the length of the hypervector created by the performing vector concatenation exceeds n (e.g. 
paragraph 0026, vector operands used in executing vector program instructions such as arithmetic/logic instructions; vector having operand bit size of 512 bits and containing eight vector elements each having a vector element size of 64 bits; paragraph 0028, vector operands provided limited maximum bit sizes, such as 1024 bits, 128 bits, etc.; operating system validated to operate correctly up to maximum operand bit size may wish to constrain application programs to not exceed the maximum vector operand bit size; paragraph 0029, hierarchy of exception level states defining corresponding maximum vector operand bit sizes for each exception level, such as 256 bits, 512 bits, etc.; imposing limit upon vector operand size; paragraph 0030, when vector operand bit size dependent instruction executed, vector operand bit size employed is controlled so as to perform the processing with the vector operand bit size governed by limit value of currently selected exception level state and any programmable limit value; vector program instruction will normally use a vector operand bit size which has a largest value permitted; paragraph 0031, permitted operand bit size querying instruction serves to return vector operand bit size indicating value constrained by the exception level; the permitted vector operand bit size querying instruction allows software executing at a particular exception level state to determine a maximum vector operand bit size it may use, to set its own programmable limit value or modify some other aspect of its behavior; paragraph 0038, software executing at a particular exception level state determining which vector operand bit sizes are not supported; paragraph 0039, determining whether specified programmable limit value being written is supported; performing additional checks to ensure written value does not conflict with programmable limit values; paragraph 0040, if supported, writing specified programmable limit value; if not supported, rounded limit value 
written; paragraph 0041, using programmable limit values to dynamically change the vector operand bit size in use; bottom exception level state not permitted to control its own vector operand bit size; paragraph 0042, increasing vector operand bit size, such as in response to change of limit value, exception level state, etc.); i.e. the system determines whether the vector operation/instruction involves a vector having a size/length which exceeds a maximum size, and provides a corresponding response/signal/exception/flag based on this determination (including to modify system behavior for the operation as appropriate, either constraining the associated program to not exceed the maximum size, changing the exception level/size, etc.)). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Stephens in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Stephens (directed to vector operand bitsize control) to include the capability to determine whether the vector operands utilized by the system in conjunction with various vector operations exceed a maximum allowable length/size and, in response, provide a corresponding signal/output/flag/exception, such as to constrain the associated program to not exceed the maximum size, to change the maximum size, etc. (as taught by Stephens). 
One of ordinary skill would have been motivated to perform such a modification in order to permit software to be used without significant modification dependent upon implementation limited vector operand bit size of a particular processor used to execute the software, and to allow software at a higher level of privilege to constrain vector operand bit sizes used by software executing at lower levels, while avoiding application programs producing undesired behavior and helping to provide deterministic behavior of the processing system, as described in Stephens (paragraph 0028, 0042-0043). With respect to claim 9, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Stephens teaches all of the limitations of claim 8 as previously discussed, and Stephens further teaches wherein n is a power of 2 (e.g. paragraph 0035, mappings of programmable values of vector operand bit sizes they specify, such as specifying vector operand size as a power of 2). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Stephens in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Stephens (directed to vector operand bitsize control) to include the capability to specify, as the maximum vector bit size/length, a value which is a power of 2 (as taught by Stephens). 
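The combined limitation of claims 8 and 9 discussed above — concatenation that raises an exception or flag when the result would exceed a maximum allowable length n, with n a power of 2 — can be sketched as follows. The exception class, function names, and the value n = 16 are all hypothetical; the claimed method and Stephens's hardware-level size limits are only being illustrated, not reproduced.

```python
N_MAX = 16  # maximum allowable hypervector length; a power of 2 (2**4), per claim 9

class HypervectorLengthError(Exception):
    """Raised when a concatenation would exceed the maximum allowable length."""

def concat(a, b, n_max=N_MAX):
    """Concatenate two hypervectors, raising an exception on overflow past n_max."""
    if len(a) + len(b) > n_max:
        raise HypervectorLengthError(
            f"result length {len(a) + len(b)} exceeds maximum {n_max}")
    return a + b

v = concat([0] * 8, [1] * 8)   # exactly n_max: allowed
assert len(v) == 16
try:
    concat(v, [1])             # would exceed n_max: exception is raised
except HypervectorLengthError:
    pass
```

Stephens reaches the analogous result in hardware (exception-level states constraining vector operand bit sizes); the sketch shows only the software-visible behavior the rejection maps onto the claim.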
One of ordinary skill would have been motivated to perform such a modification in order to permit software to be used without significant modification dependent upon implementation limited vector operand bit size of a particular processor used to execute the software, and to allow software at a higher level of privilege to constrain vector operand bit sizes used by software executing at lower levels, while avoiding application programs producing undesired behavior and helping to provide deterministic behavior of the processing system, as described in Stephens (paragraph 0028, 0042-0043). Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Mohsen Imani, Sahand Salamat, Saransh Gupta, Jiani Huang, and Tajana Rosing. 2019. FACH: FPGA-based acceleration of hyperdimensional computing by reducing computational complexity. In Proceedings of the 24th Asia and South Pacific Design Automation Conference (ASPDAC '19). Association for Computing Machinery, New York, NY, USA, 493–498. https://doi.org/10.1145/3287624.3287667 [retrieved from the Internet: https://dl.acm.org/doi/abs/10.1145/3287624.3287667 ]. (hereinafter Mohsen). With respect to claim 12, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider teaches all of the limitations of claim 1 as previously discussed. Cherubini and Fritz do not explicitly disclose wherein each element of each hypervector is an integer in the range from 0 to p-1. However, Mohsen teaches wherein each element of each hypervector is an integer in the range from 0 to 1-p (e.g. 
page 1, second column, third paragraph, HD can use binarized class hypervectors (0 and 1), or hypervectors with non-binary elements; page 2, second column, first paragraph, each element of class hypervector can have a non-binarized value, such that reasoning task is performed using integer rather than binary values; page 2, second column, second paragraph, to reduce computational cost, binarizing class elements; accuracy-efficiency tradeoffs using binarized or non-binarized hypervectors; page 2, second column, third paragraph, HD with binary model provides significantly low classification accuracy compared to non-binary model; for face recognition, HD using non-binarized class elements provides 57.8% higher accuracy than HD using binarized hypervectors; i.e. each hypervector may be comprised of binarized elements, such as integer elements in the range from 0 to 1 (i.e. in a case where p itself is zero such that the range of 0 to 1-p = 0 to 1-0, and therefore 0 to 1), or may be comprised of non-binarized elements, such as integer elements within an arbitrary range (i.e. including a range encompassing 0 to any other endpoint which may be arbitrarily defined as 1-p or p-1 (such as an endpoint of 32, defined as either 1-(-31) or 33-1, or some other endpoint)). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Mohsen in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Mohsen (directed to acceleration of hyperdimensional computing) to include the capability to implement each hypervector as a hypervector of integer elements having a range from 0 to an arbitrary value, such as 1-p or p-1 (as taught by Mohsen). One of ordinary skill would have been motivated to perform such a modification in order to either reduce computational cost or achieve greater accuracy, as described in Mohsen (page 2, second column, second and third paragraphs). With respect to claim 13, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Mohsen teaches all of the limitations of claim 12 as previously discussed, and Mohsen further teaches wherein p is a prime number (e.g. 
page 1, second column, third paragraph, HD can use binarized class hypervectors (0 and 1), or hypervectors with non-binary elements; page 2, second column, first paragraph, each element of class hypervector can have a non-binarized value, such that reasoning task is performed using integer rather than binary values; page 2, second column, second paragraph, to reduce computational cost, binarizing class elements; accuracy-efficiency tradeoffs using binarized or non-binarized hypervectors; page 2, second column, third paragraph, HD with binary model provides significantly low classification accuracy compared to non-binary model; for face recognition, HD using non-binarized class elements provides 57.8% higher accuracy than HD using binarized hypervectors; i.e. each hypervector may be comprised of binarized elements, such as integer elements in the range from 0 to 1 (i.e. in a case where p itself is two (a prime number) such that the range of 0 to p-1 = 0 to 2-1, and therefore 0 to 1), or may be comprised of non-binarized elements, such as integer elements within an arbitrary range which may be defined using values of p and 1 (i.e. including a range encompassing 0 to any other endpoint which may be arbitrarily defined as 1-p or p-1 (such as an endpoint of -1, where 1-p is used and p is 2, an endpoint of 2, where p-1 is used and p is 3, etc.), or some other endpoint)). 
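The claim 12/13 limitation as mapped above — each hypervector element an integer in {0, ..., p-1}, with p prime, where p = 2 gives the binarized case Mohsen describes — can be sketched as follows. The generation scheme and all names are hypothetical illustrations, not taken from Mohsen.

```python
import random

def make_hypervector(dim, p, rng):
    """Random hypervector whose elements are integers in the range 0..p-1."""
    return [rng.randrange(p) for _ in range(dim)]

rng = random.Random(42)  # seeded for reproducibility

p = 5  # a prime modulus; larger primes give Mohsen's non-binarized elements
hv = make_hypervector(10, p, rng)
assert all(0 <= e <= p - 1 for e in hv)

binary_hv = make_hypervector(10, 2, rng)  # p = 2: the binarized {0, 1} case
assert set(binary_hv) <= {0, 1}
```

The p = 2 case corresponds to the binarized hypervectors Mohsen uses for reduced computational cost, and larger p to the non-binarized elements Mohsen reports as more accurate.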
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Mohsen in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Mohsen (directed to acceleration of hyperdimensional computing) to include the capability to implement each hypervector as a hypervector of integer elements having a range from 0 to an arbitrary value, such as 1-p or p-1, where p may be a prime number, such as 2 or 3 (as taught by Mohsen and discussed above). One of ordinary skill would have been motivated to perform such a modification in order to either reduce computational cost or achieve greater accuracy, as described in Mohsen (page 2, second column, second and third paragraphs). Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Mohsen, further in view of Stephens. With respect to claim 14, Cherubini in view of Fritz, further in view of Seger, further in view of Snaider, further in view of Mohsen teaches all of the limitations of claim 12 as previously discussed, and Cherubini and Mohsen further teach wherein: each hypervector consists of one or more subvectors, wherein each subvector has a fixed length y (e.g. 
Cherubini paragraph 0056, equal length HD vectors; paragraph 0076, high-dimensional vectors of fixed length; fixed length of vectors for representations); and one or both of y and p are powers of 2 (e.g. Mohsen page 1, second column, third paragraph, HD can use binarized class hypervectors (0 and 1), or hypervectors with non-binary elements; page 2, second column, first paragraph, each element of class hypervector can have a non-binarized value, such that reasoning task is performed using integer rather than binary values; page 2, second column, second paragraph, to reduce computational cost, binarizing class elements; accuracy-efficiency tradeoffs using binarized or non-binarized hypervectors; page 2, second column, third paragraph, HD with binary model provides significantly low classification accuracy compared to non-binary model; for face recognition, HD using non-binarized class elements provides 57.8% higher accuracy than HD using binarized hypervectors; i.e. each hypervector may be comprised of binarized elements, such as integer elements in the range from 0 to 1 (i.e. in a case where p itself is two (a power of 2) such that the range of 0 to p-1 = 0 to 2-1, and therefore 0 to 1), or may be comprised of non-binarized elements, such as integer elements within an arbitrary range which may be defined using values of p and 1 (i.e. including a range encompassing 0 to any other endpoint which may be arbitrarily defined as 1-p or p-1 (such as an endpoint of -1, where 1-p is used and p is 2, an endpoint of 7, where p-1 is used and p is 8, etc.), or some other endpoint)). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, and Mohsen in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Mohsen (directed to acceleration of hyperdimensional computing) to include the capability to implement each hypervector as a hypervector of integer elements having a range from 0 to an arbitrary value, such as 1-p or p-1, where p may be a power of 2 (as taught by Mohsen and discussed above). One of ordinary skill would have been motivated to perform such a modification in order to either reduce computational cost or achieve greater accuracy, as described in Mohsen (page 2, second column, second and third paragraphs). In addition, assuming arguendo that Cherubini, Fritz, and Mohsen do not explicitly disclose that the fixed length y of the vector is a power of 2, Stephens teaches that the fixed length y of the vector is a power of 2 (e.g. paragraph 0035, mappings of programmable values of vector operand bit sizes they specify, such as specifying vector operand size as a power of 2). 
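The claim 14 structure being mapped — a hypervector composed of subvectors of fixed length y, with one or both of y and p powers of 2 — can be sketched as below. The values y = 8 and p = 4 and all names are illustrative assumptions, not drawn from the cited references.

```python
y = 8  # fixed subvector length, a power of 2 (2**3)
p = 4  # element modulus, a power of 2 (2**2)

def split_subvectors(hv, y):
    """Split a hypervector into its fixed-length-y subvectors."""
    if len(hv) % y != 0:
        raise ValueError("hypervector length must be a multiple of y")
    return [hv[i:i + y] for i in range(0, len(hv), y)]

hv = [i % p for i in range(24)]   # 24 elements in 0..p-1: three subvectors
subs = split_subvectors(hv, y)
assert len(subs) == 3 and all(len(s) == y for s in subs)
```

Power-of-2 choices for y and p align with Stephens's power-of-2 operand sizes and make the subvector boundaries fall on natural word boundaries, which is the practical point of the combination.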
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Cherubini, Fritz, Seger, Snaider, Mohsen, and Stephens in front of him to have modified the teachings of Cherubini (directed to answering cognitive queries from sensor input signals, based on vector operations including binding and bundling), Mohsen (directed to acceleration of hyperdimensional computing), Seger (directed to categorical variable encoding techniques in machine learning, such as encoding of feature values in binary vector format), Snaider (directed to modular composite representation for high-dimensional vector spaces), and Fritz (directed to fast binary counters based on symmetric stacking, such as for stacking/concatenating vectors), to incorporate the teachings of Stephens (directed to vector operand bitsize control) to include the capability to specify, as the maximum vector bit size/length, a value which is a power of 2 (as taught by Stephens). One of ordinary skill would have been motivated to perform such a modification in order to permit software to be used without significant modification dependent upon implementation limited vector operand bit size of a particular processor used to execute the software, and to allow software at a higher level of privilege to constrain vector operand bit sizes used by software executing at lower levels, while avoiding application programs producing undesired behavior and helping to provide deterministic behavior of the processing system, as described in Stephens (paragraph 0028, 0042-0043). It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. “The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. 
They are part of the literature of the art, relevant for all they contain,” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L STANLEY whose telephone number is (469)295-9105. The examiner can normally be reached on Monday-Friday from 9:00 AM to 5:00 PM CST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at telephone number (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form. /JEREMY L STANLEY/ Primary Examiner, Art Unit 2127

Prosecution Timeline

Nov 24, 2021
Application Filed
Apr 05, 2025
Non-Final Rejection — §103
Jul 10, 2025
Response Filed
Oct 14, 2025
Final Rejection — §103
Jan 20, 2026
Request for Continued Examination
Jan 23, 2026
Response after Non-Final Action
Jan 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591827
ETHICAL CONFIDENCE FABRICS: MEASURING ETHICAL ALGORITHM DEVELOPMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12580783
CONFIGURING 360-DEGREE VIDEO WITHIN A VIRTUAL CONFERENCING SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12572266
ACCESSING AND DISPLAYING INFORMATION CORRESPONDING TO PAST TIMES AND FUTURE TIMES
2y 5m to grant Granted Mar 10, 2026
Patent 12561041
Systems, Methods, and Graphical User Interfaces for Interacting with Virtual Reality Environments
2y 5m to grant Granted Feb 24, 2026
Patent 12555684
ASSESSING A TREATMENT SERVICE BASED ON A MEASURE OF TRUST DYNAMICS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
92%
With Interview (+44.7%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 276 resolved cases by this examiner. Grant probability derived from career allow rate.
