Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/2025 has been entered.
Response to Amendment
The amendments filed 12/22/2025 have been entered.
Claims 1, 3-12, and 14 remain pending in the application.
The amendments filed 12/22/2025 necessitate grounds of rejection under 35 U.S.C. 112(b) that were also set forth in the Office action mailed 02/26/2025. See the 112(b) rejection for improper Markush language below.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“a storage unit for storing said radius value” in claim 1, “a storage unit storing said radius value” in claim 11, and “a storage unit storing the radius value” in claim 14. See the 35 U.S.C. 112 rejection below for further comments.
“a computation unit configured to… determine a predicted number of lattice points” in claims 1, 11, and 14. This element is interpreted under 35 U.S.C. 112(f) as a processor (Figure 1 and Specification page 24, lines 28-32 “For a hardware implementation, the processing elements of the lattice prediction device 200 can be implemented for example according to a hardware-only configuration (for example in one or more FPGA, ASIC, or VLSI integrated circuits with the corresponding memory) or according to a configuration using both VLSI and Digital Signal Processor (DSP). Furthermore, the method described herein can be implemented by computer program instructions supplied to the processor of any type of computer to produce a machine with a processor that executes the instructions to implement the functions/acts specified herein.”), with the algorithm described in the specification (page 12, lines 28 to page 13, line 2 “the machine learning algorithm may be a supervised machine learning algorithm that maps input data to predicted data using a function that is determined based on labeled training data that consists of a set of labeled input-output pairs. Exemplary supervised machine learning algorithms comprise, without limitation, Support Vector Machines (SVM), linear regression, logistic regression, naive Bayes, linear discriminant analysis, decision trees, k-nearest neighbor algorithm, neural networks, and similarity learning.”).
"the computation unit is configured to perform a QR decomposition" in claims 1, 11, and 14. This element is interpreted under 35 U.S.C. 112(f) as a processor (Figure 1 and Specification page 24, lines 28-32 “For a hardware implementation, the processing elements of the lattice prediction device 200 can be implemented for example according to a hardware-only configuration (for example in one or more FPGA, ASIC, or VLSI integrated circuits with the corresponding memory) or according to a configuration using both VLSI and Digital Signal Processor (DSP). Furthermore, the method described herein can be implemented by computer program instructions supplied to the processor of any type of computer to produce a machine with a processor that executes the instructions to implement the functions/acts specified herein.”), with the algorithm described in the specification (page 12, lines 16-18 “ the computation unit 201 may be configured to perform a QR decomposition to the lattice generator matrix M = QR”),
“computation unit being configured to determine said input data” in claims 1, 11, and 14. This element is interpreted under 35 U.S.C. 112(f) as a processor (Figure 1 and Specification page 24, lines 28-32 “For a hardware implementation, the processing elements of the lattice prediction device 200 can be implemented for example according to a hardware-only configuration (for example in one or more FPGA, ASIC, or VLSI integrated circuits with the corresponding memory) or according to a configuration using both VLSI and Digital Signal Processor (DSP). Furthermore, the method described herein can be implemented by computer program instructions supplied to the processor of any type of computer to produce a machine with a processor that executes the instructions to implement the functions/acts specified herein.”), with the algorithm described in the specification (page 12, lines 18-21 “The computation unit 201 may be configured to determine input data … by performing multiplication operation between each component of the upper triangular matrix and the inverse of the radius value.” ),
“computation unit is configured to determine said model parameters during the training phase” in claim 6. This element is interpreted under 35 U.S.C. 112(f) as a processor (Figure 1 and Specification page 24, lines 28-32 “For a hardware implementation, the processing elements of the lattice prediction device 200 can be implemented for example according to a hardware-only configuration (for example in one or more FPGA, ASIC, or VLSI integrated circuits with the corresponding memory) or according to a configuration using both VLSI and Digital Signal Processor (DSP). Furthermore, the method described herein can be implemented by computer program instructions supplied to the processor of any type of computer to produce a machine with a processor that executes the instructions to implement the functions/acts specified herein.”), with the algorithm described in the specification (page 17, lines 12-14 “the computation unit 201 may be configured to determine (update or adjust) the model parameters during a training phase in mini-batches extracted from the received training data.”),
“computation unit being configured to determine a plurality of sets of training data from said training data and expected numbers of lattice points” in claim 6. This element is interpreted under 35 U.S.C. 112(f) as a processor (Figure 1 and Specification page 24, lines 28-32 “For a hardware implementation, the processing elements of the lattice prediction device 200 can be implemented for example according to a hardware-only configuration (for example in one or more FPGA, ASIC, or VLSI integrated circuits with the corresponding memory) or according to a configuration using both VLSI and Digital Signal Processor (DSP). Furthermore, the method described herein can be implemented by computer program instructions supplied to the processor of any type of computer to produce a machine with a processor that executes the instructions to implement the functions/acts specified herein.”), with the algorithm described in the specification (page 17, lines 15-16 “the computation unit 201 may be configured to partition the received training data into a plurality NB of sets of training data”),
“computation unit being configured to: process said deep neural network using a set of training data” in claim 6. This element is interpreted under 35 U.S.C. 112(f) as a processor (Figure 1 and Specification page 24, lines 28-32 “For a hardware implementation, the processing elements of the lattice prediction device 200 can be implemented for example according to a hardware-only configuration (for example in one or more FPGA, ASIC, or VLSI integrated circuits with the corresponding memory) or according to a configuration using both VLSI and Digital Signal Processor (DSP). Furthermore, the method described herein can be implemented by computer program instructions supplied to the processor of any type of computer to produce a machine with a processor that executes the instructions to implement the functions/acts specified herein.”), with the algorithm described in the specification (page 18, lines 1-2 “the computation unit 201 may be configured to: process the deep neural network using a mini-batch among the plurality of training sets as input”),
“computation unit being configured to: …determine a loss function from the expected number of lattice points and the intermediate number of lattice points associated with said set of training data” in claim 6. This element is interpreted under 35 U.S.C. 112(f) as a processor (Figure 1 and Specification page 24, lines 28-32 “For a hardware implementation, the processing elements of the lattice prediction device 200 can be implemented for example according to a hardware-only configuration (for example in one or more FPGA, ASIC, or VLSI integrated circuits with the corresponding memory) or according to a configuration using both VLSI and Digital Signal Processor (DSP). Furthermore, the method described herein can be implemented by computer program instructions supplied to the processor of any type of computer to produce a machine with a processor that executes the instructions to implement the functions/acts specified herein.”), with the algorithm described in the specification (page 18, lines 5-7 “the computation unit 201 may be configured to… compute a loss function denoted L… for the processed mini-batch… from the expected number of lattice points… associated with the mini-batch and the intermediate number of lattice points …determined by processing the mini-batch of data”)
“computation unit being configured to: …determine updated model parameters by applying an optimization algorithm according to the minimization of said loss function” in claim 6. This element is interpreted under 35 U.S.C. 112(f) as a processor (Figure 1 and Specification page 24, lines 28-32 “For a hardware implementation, the processing elements of the lattice prediction device 200 can be implemented for example according to a hardware-only configuration (for example in one or more FPGA, ASIC, or VLSI integrated circuits with the corresponding memory) or according to a configuration using both VLSI and Digital Signal Processor (DSP). Furthermore, the method described herein can be implemented by computer program instructions supplied to the processor of any type of computer to produce a machine with a processor that executes the instructions to implement the functions/acts specified herein.”), with the algorithm described in the specification (page 18, lines 9-11 “the computation unit 201 may be configured to: …determine updated model parameters after processing the mini-batch …according to the minimization of the loss function L”)
“the computation unit is configured to determine initial model parameters” in claim 9. This element is interpreted under 35 U.S.C. 112(f) as a processor (Figure 1 and Specification page 24, lines 28-32 “For a hardware implementation, the processing elements of the lattice prediction device 200 can be implemented for example according to a hardware-only configuration (for example in one or more FPGA, ASIC, or VLSI integrated circuits with the corresponding memory) or according to a configuration using both VLSI and Digital Signal Processor (DSP). Furthermore, the method described herein can be implemented by computer program instructions supplied to the processor of any type of computer to produce a machine with a processor that executes the instructions to implement the functions/acts specified herein.”), with the algorithm described in the specification (page 18, lines 17-19 “the computation unit 201 may be configured to determine initial model parameters that will be used during the forward propagation phase of the first processing iteration of the training process”),
“computation unit is configured to determine said expected numbers of lattice points” in claim 10. This element is interpreted under 35 U.S.C. 112(f) as a processor (Figure 1 and Specification page 24, lines 28-32 “For a hardware implementation, the processing elements of the lattice prediction device 200 can be implemented for example according to a hardware-only configuration (for example in one or more FPGA, ASIC, or VLSI integrated circuits with the corresponding memory) or according to a configuration using both VLSI and Digital Signal Processor (DSP). Furthermore, the method described herein can be implemented by computer program instructions supplied to the processor of any type of computer to produce a machine with a processor that executes the instructions to implement the functions/acts specified herein.”), with the algorithm described in the specification (page 19, lines 10-13 “the computation unit 201 may be configured to previously determine the expected numbers of lattice points associated with each mini-batch S for l = 1,...,NB from the radius value r and the lattice generator matrix M by applying a list sphere decoding algorithm or a list SB-Stack decoding algorithm.”).
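Examiner’s Note (illustration only): the input-data computation recited in the specification excerpts above (a QR decomposition of the lattice generator matrix followed by multiplying each component of the upper triangular matrix by the inverse of the radius value) may be sketched as follows; the function name, the flattening of the matrix into a vector, and the example values are the examiner's assumptions, not applicant's disclosed implementation.

    import numpy as np

    def dnn_input_features(M, r):
        # Sketch: QR-decompose the lattice generator matrix M (M = QR,
        # with R upper triangular) and scale each component of R by the
        # inverse of the radius value r (cf. Specification p. 12, ll. 16-21).
        Q, R = np.linalg.qr(M)
        return (R / r).flatten()

    # Hypothetical usage: arbitrary 4x4 generator matrix, radius 2.0
    x = dnn_input_features(np.random.randn(4, 4), 2.0)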
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-10 are rejected under 35 U.S.C. 112(b) as being indefinite because they contain Markush groupings that require a selection from an open list of alternatives.
As per MPEP § 2173.05(h), if a Markush grouping requires a material selected from an open list of alternatives (e.g., selected from the group "comprising" or "consisting essentially of" the recited alternatives), the claim should generally be rejected under 35 U.S.C. 112(b) as indefinite because it is unclear what other alternatives are intended to be encompassed by the claim. See In re Kiely, 2022 USPQ2d 532, at *2 (Fed. Cir. 2022) (each independent claim recites "a selection from the group comprising a person, an animal, an animated character, a creature, an alien, a toy, a structure, a vegetable, and a fruit." … (emphasis added). "Given the breadth of variation among the specified alternatives and the use of the open-ended word ‘comprising’ to define the scope of the list, we affirm the Board's conclusion that the pending claims recite improper Markush language and are indefinite under § 112(b).").
Claims 3, 5, 7, and 8 recite the following improper Markush language, because they include the open-ended word ‘comprising’:
“chosen in a group comprising Support Vector Machines, linear regression, logistic regression, naive Bayes, linear discriminant analysis, decision trees, k-nearest neighbor algorithm, neural networks, and similarity learning” in claim 3,
“chosen in a group comprising a linear activation function, a sigmoid function, a Relu function, the Tanh, the softmax function, and the CUBE function” in claim 5,
“chosen in a group comprising the Adadelta optimization algorithm, the Adagrad optimization algorithm, the adaptive moment estimation algorithm, the Nesterov accelerated gradient algorithm, the Nesterov-accelerated adaptive moment estimation algorithm, the RMSprop algorithm, stochastic gradient optimization algorithms, and adaptive learning rate optimization algorithms” in claim 7, and
“chosen in a group comprising a mean square error function and an exponential log likelihood function” in claim 8.
Here, for each of the groupings listed above, “chosen in a group comprising” signifies that the alternatives listed in each claim are selected from an open list, and thus it is unclear what other alternatives are intended to be encompassed by the claims.
Dependent claims 4-10 inherit the deficiency and therefore are rejected on the same basis.
Claims 1, 3-10, 11-12, and 14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim limitations “a storage unit for storing said radius value” in claim 1, “a storage unit storing said radius value” in claim 11, and “a storage unit storing the radius value” in claim 14 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The corresponding structure described in the specification as performing the claimed function, and equivalents thereof, is found in Figure 1 and Specification page 12, lines 3-6 “The lattice prediction device 200 may comprise a storage unit 203 configured to store the radius value r and the lattice generator matrix M and load their values to the computation unit 201.” The disclosure does not provide adequate structure, beyond the recitation of a storage unit, to perform the claimed function of “storing said radius value” in independent claims 1, 11, and 14. The specification does not demonstrate that applicant has made an invention that achieves the claimed function because the invention is not described with sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. (FP 7.31.01.) Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Dependent claims 3-10 and claim 12 inherit the deficiency from claims 1 and 11, respectively, and are therefore rejected on the same basis.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 3-10, 11-12, and 14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
As described above, the disclosure does not provide adequate structure to perform the claimed function of storing a radius value in independent claims 1, 11, and 14. The specification does not demonstrate that applicant has made an invention that achieves the claimed function because the invention is not described with sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. (FP 7.31.01.) Dependent claims 3-10 and 12 inherit the deficiency from claims 1 and 11, respectively, and are therefore rejected on the same basis.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Mohammadkarimi et al. ("Deep Learning Based Sphere Decoding"), hereafter Mdkarimi, in view of Deilami et al. (US 2007/0286313 A1), hereafter Deilami, in further view of Kim et al. (US 2008/0313252 A1), hereafter Kim.
Regarding claim 1, Mdkarimi discloses:
A communication system for communicating information in a digital form comprising a lattice prediction device for … lattice points falling inside a bounded region in a given vector space (Mdkarimi, page 2, right column, final paragraph, line 1 “We consider a spatial multiplexing MIMO system” and page 3, left column, last paragraph, lines 1-3 “In the proposed DL-based sphere decoding, the Euclidean distance of the q closest lattice points to vector y in the skewed lattice space is reconstructed” teaches a communication system comprising a prediction device for predicting lattice points falling inside a bounded region in a given vector space),
said bounded region being defined by a radius value, (Mdkarimi, page 2, left column, second paragraph, lines 2-3 “the radius of the decoding hypersphere is learned”),
a lattice point representing a digital signal in a lattice constructed over said vector space (Mdkarimi, page 2, right column, last paragraph, lines 1-4 “spatial multiplexing MIMO system with m transmit and n receive antennas. The vector of received baseband symbols, y…, in block-fading channels is modeled” and page 3, left column, second paragraph, lines 1-3 “The vector s spans the ‘rectangular’ m-dimensional complex integer lattice D…, and the n-dimensional vector Hs spans a ‘skewed’ lattice” teaches lattices constructed over vector spaces that represent digital signals),
said lattice being defined by a lattice generator matrix comprising components (Mdkarimi, page 3, left column, second paragraph, lines 3-4 “lattice-generating matrix H” teaches the lattice being defined by lattice generating matrix H),
the lattice prediction device comprises: … radius value defining said bounded region (Mdkarimi, page 3, left column, paragraph 3, line 3 “a hypersphere of radius d” and page 4, right column, first paragraph, lines 2-4 “For each input training vector x(i), … employing SDIRS with a set of heuristic radiuses” teaches a radius value defining said bounded region),
determine a predicted … of lattice points that fall inside said bounded region by applying a machine learning algorithm using a deep neural network associated with model parameters and an activation function, the model parameters being determined during a training phase, (As per Claim interpretation of a computation unit configured to determine a predicted number of lattice points above: Mdkarimi, Fig. 1, equation 2, and page 2, right column, paragraph below equation 2, lines 1-3 “denotes the set of parameters … Al is the activation function” and page 2, left column, last paragraph, lines 2-3 “processing units”, and page 3, left column, final paragraph, lines 2-4 “q closest lattice points to vector y in the skewed lattice space is reconstructed via a DNN (as the DNN output) prior to sequential sphere decoding implementations” teaches a computation unit, i.e., a processing unit configured to determine predicted lattice points by applying a machine learning algorithm using the DNN in Fig. 1),
the machine learning algorithm taking as input an input data vector comprising inputs determined from said radius value and said components of lattice generator matrix and delivering as output said predicted … lattice points (Mdkarimi, Fig. 1, algorithm 1, and page 4, right column, first paragraph, lines 2-4 “For each input training vector x(i), the corresponding desired radius vector r(i) is obtained by employing SDIRS with a set of heuristic radiuses” and page 3, left column, final paragraph, lines 2-4 “q closest lattice points to vector y in the skewed lattice space is reconstructed via a DNN (as the DNN output) prior to sequential sphere decoding implementations” teaches applying a machine learning algorithm using the DNN in Fig. 1, where input data is derived from heuristic radius values of hyperspheres and components of lattice generator matrix, and the DNN output is the predicted lattice points),
the computation unit is configured to perform … to said lattice generator matrix (As per Claim interpretation of the computation unit is configured to perform a QR decomposition above: Mdkarimi, page 2, left column, last paragraph, lines 2-3 “processing units” and page 3, left column, second paragraph, lines 3-4 “a ‘skewed’ lattice for any given lattice-generating matrix H” teaches a computation unit configured to perform operations on a lattice-generating matrix H to obtain a skewed lattice),
computation unit being configured to determine said input data by performing a multiplication operation between each component of … matrix and an inverse of said radius value (As per Claim interpretation of the computation unit being configured to determine said input data above: Mdkarimi, page 2, left column, last paragraph, lines 2-3 “processing units” and page 4, algorithm 1 teaches determining input data by performing multiplication operations between components of matrices and transposed radius value as inverse of radius value),
said predicted … lattices being used for a detection of coded or uncoded transmitted signals by the communication system (Mdkarimi, page 2, right column, final paragraph, lines 1-2 “We consider a spatial multiplexing MIMO system with m transmit and n receive antennas” and page 3, left column, final paragraph, last two lines to right column, first line “Then, these q learned Euclidean distances are used as radiuses of the hyperspheres in sphere decoding implementations” teaches using predicted lattices for detection of transmitted signals by a communication system).
Mdkarimi teaches predicting radiuses of the q closest lattice points by applying a machine learning algorithm, and said predicted … lattices being used for a detection of coded or uncoded transmitted signals by the communication system, but does NOT teach predicting a number of lattice points.
Deilami teaches:
predicting a … number of lattice points (Deilami, paragraph [0032], lines 1-6 “to implement lattice searching inside a sphere…enumerate the lattice points inside a sphere.” teaches predicting a number of lattice points).
Mdkarimi and Deilami are analogous art because they are from the same field of endeavor, lattice point estimation and sphere decoding.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mdkarimi to include predicting a number of lattice points, based on the teachings of Deilami. The motivation for doing so would have been to improve the performance (Deilami, paragraph [0006], last two lines “this method can work in an iterative fashion to improve the performance").
While Mdkarimi discloses a radius value defining said bounded region, it does not explicitly disclose a storage unit for storing … radius values defining … bounded region.
Deilami teaches:
a storage unit for storing … radius values defining … bounded region (Deilami, ¶[0064] and ¶[0032] teaches a storage unit for storing radius values defining a bounded region of a sphere).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mdkarimi to include a storage unit for storing … radius values defining … bounded region, based on the teachings of Deilami. The motivation for doing so would have been to improve the performance (Deilami, paragraph [0006], last two lines “this method can work in an iterative fashion to improve the performance").
Mdkarimi teaches the lattice generator matrix and component matrices, but does not explicitly teach:
perform a QR decomposition to said lattice generator matrix, which provides an upper triangular matrix.
Kim teaches:
perform a QR decomposition to said lattice generator matrix, which provides an upper triangular matrix (as per Claim interpretation of the computation unit is configured to perform a QR decomposition cited above: Kim, paragraph [0045], lines 3-5 “from the received signal vector as an initial estimate, performs QR-decomposition (QRD) of the channel matrix H, and obtains Q and R matrices in step S310.” teaches performing QR decomposition to a lattice generator matrix to obtain upper triangular matrix R) (Examiner’s Note: for prior art purposes, the examiner interprets the R matrix generated from QR decomposition to be an upper triangular matrix).
Mdkarimi, Deilami, and Kim are analogous art because they are from the same field of endeavor, lattice point estimation and sphere decoding.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mdkarimi, in view of Deilami, to include performing a QR decomposition to said lattice generator matrix to obtain an upper triangular matrix, based on the teachings of Kim. The motivation for doing so would have been to reduce the complexity (Kim, paragraph [0004], line 7 “reducing the complexity").
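Examiner’s Note (illustration only): the property relied on above, that a QR decomposition yields an upper triangular R matrix, can be checked with the following generic sketch; the matrix is arbitrary, and this is not Kim's implementation.

    import numpy as np

    H = np.random.randn(4, 4)          # channel / lattice-generating matrix
    Q, R = np.linalg.qr(H)             # QR decomposition yields Q and R
    assert np.allclose(R, np.triu(R))  # R is upper triangular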
Regarding claim 3, Mdkarimi, in view of Deilami, in further view of Kim, discloses the communication system of claim 1. Mdkarimi further discloses:
wherein the machine learning algorithm is a supervised machine learning algorithm chosen in a group comprising Support Vector Machines, linear regression, logistic regression, naive Bayes, linear discriminant analysis, decision trees, k-nearest neighbor algorithm, neural networks, and similarity learning (Mdkarimi, page 4, algorithm 1 teaches the machine learning algorithm to be a supervised algorithm, more specifically a deep neural network).
Regarding claim 4, Mdkarimi, in view of Deilami, in further view of Kim, discloses the communication system of claim 3. Mdkarimi further discloses:
wherein the deep neural network is a multilayer deep neural network comprising an input layer, one or more hidden layers, and an output layer, each layer comprising a plurality of computation nodes (Mdkarimi, page 2, Fig. 1),
said multilayer deep neural network being associated with model parameters and an activation function, the activation function of the multilayer deep neural network being implemented in at least one computation node among the plurality of computation nodes of said one or more hidden layers (Mdkarimi, page 2, equation 2 and right column, paragraph below equation 2, lines 1-3 “denotes the set of parameters … Al is the activation function”).
Regarding claim 5, Mdkarimi, in view of Deilami, in further view of Kim, discloses the communication system of claim 4. Mdkarimi further discloses:
wherein said activation function is chosen in a group comprising a linear activation function, a sigmoid function, a Relu function, a Tanh, a softmax function, and a CUBE function (Mdkarimi, page 3, right column, last paragraph, lines 3-5 “Clipped rectified linear unit with the following mathematical operation is used as the activation function in the hidden layers” and equation 11 teaches a ReLU activation function).
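Examiner’s Note (illustration only): a clipped rectified linear unit of the kind quoted above has the general form sketched below; the ceiling c is a placeholder, and Mdkarimi's equation 11 defines the actual operation used in the hidden layers.

    import numpy as np

    def clipped_relu(x, c=1.0):
        # 0 for x < 0, x for 0 <= x <= c, and c for x > c
        return np.minimum(np.maximum(x, 0.0), c)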
Regarding claim 6, Mdkarimi, in view of Deilami, in further view of Kim, discloses the communication system of claim 4 and the computation unit. Mdkarimi further discloses:
wherein the computation unit is configured to determine said model parameters during the training phase from received training data (As per Claim interpretation of computation unit is configured to determine said model parameters above: Mdkarimi, page 2, left column, last paragraph, lines 2-3 “processing units” and page 2, right column, paragraph below equation 2, lines 6-7 “The weights and biases are usually learned through a training set with known desired outputs” and page 4, left column, paragraph below equation 13, lines 3-5 “an approximation … is computed for mini-batches of training examples at each iteration” teaches determining the model parameters during a training phase in mini-batches extracted from the received training data),
said computation unit being configured to determine a plurality of sets of training data from said training data and expected … lattice points (As per Claim interpretation of computation unit being configured to determine a plurality of sets of training data above: Mdkarimi, page 2, left column, last paragraph, lines 2-3 “processing units” and page 4, left column, paragraph 2, lines 1-2 “In the training phase, the designed DNN is trained with independent input vectors”, equation 12 and algorithm 1, and page 4, left column, paragraph below equation 13, lines 3-5 “mini-batches of training examples at each iteration” teaches a computation unit configured to determine sets of training data in partitioned mini batches from training data and expected lattice points q),
Mdkarimi teaches determining training data from expected lattice points, but does not teach the expected lattice points to be an expected number of lattice points.
Deilami teaches:
number of lattice points (Deilami, paragraph [0032], lines 1-6 “to implement lattice searching inside a sphere…enumerate the lattice points inside a sphere.” teaches determining a number of lattice points).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mdkarimi to include determining a number of lattice points, based on the teachings of Deilami. The motivation for doing so would have been to improve the performance (Deilami, paragraph [0006], last two lines “this method can work in an iterative fashion to improve the performance").
Mdkarimi discloses:
each expected … lattice points being associated with a set of training data among said plurality of sets of training data (Mdkarimi, page 4, algorithm 1, line 2 teaches each q lattice point being associated with a set of training data),
said training phase comprising two or more processing iterations (Mdkarimi, page 4, left column, paragraph below equation 13, lines 3 and 7 “each iteration t… t = 1, 2, …, B” teaches training phase comprising two or more processing iterations),
at each processing iteration, the computation unit being configured to: process said deep neural network using a set of training data among said plurality of training data as input, which provides … intermediate … lattice points associated with said set of training data (As per Claim interpretation of computation unit being configured to determine a plurality of sets of training data above: Mdkarimi, page 2, left column, last paragraph, lines 2-3 “processing units” and page 4, left column, paragraph below equation 13, lines 3-5 “mini-batches of training examples at each iteration” and page 4, left column, last paragraph, last 3 lines and right column, first paragraph, first 4 lines “the real and imaginary parts of the observation vectors during training…are stacked as in (12), and fed to the DNN…For each input training vector x(i), the corresponding desired radius vector r(i) is obtained...” teaches a computation unit configured to process said deep neural network using minibatches as a set of training data as input, which provides desired vector r(i) as intermediate lattice points),
determine a loss function from the expected … lattice points and the intermediate … lattice points associated with said set of training data (As per Claim interpretation of computation unit being configured to… determine a loss function from the expected number of lattice points above: Mdkarimi, page 2, left column, last paragraph, lines 2-3 “processing units” and page 4, equation 14 and left column, last paragraph, last 3 lines and right column, first paragraph, first 4 lines “the real and imaginary parts of the observation vectors during training…are stacked as in (12), and fed to the DNN to minimize the MSE loss function in (14)…For each input training vector x(i), the corresponding desired radius vector r(i) is obtained...” teaches a computation unit configured to determine a MSE loss function from the expected lattice points and the intermediate lattice points associated with said set of training data),
determine updated model parameters by applying an optimization algorithm according to a minimization of said loss function (As per Claim interpretation of computation unit being configured to… determine updated model parameters above: Mdkarimi, page 2, left column, last paragraph, lines 2-3 “processing units” and page 4, right column, first paragraph, lines 1-7 “minimize the MSE loss function in (14)... the parameter vector of the DNN is updated according to the input-output vector pairs … by employing the adaptive moment estimation stochastic optimization algorithm,” teaches a computation unit configured to determine updated model parameters by applying an optimization algorithm according to the minimization of said loss function).
Mdkarimi teaches the above embodiments using expected lattice points, more specifically the radiuses of the q closest lattice points, but does not teach the expected lattice points to be an expected number of lattice points.
Deilami teaches:
determining number of lattice points (Deilami, paragraph [0032], lines 1-6 “to implement lattice searching inside a sphere…enumerate the lattice points inside a sphere.” teaches determining a number of lattice points).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mdkarimi to include determining a number of lattice points, based on the teachings of Deilami. The motivation for doing so would have been to improve the performance (Deilami, paragraph [0006], last two lines “this method can work in an iterative fashion to improve the performance").
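Examiner’s Note (illustration only): the training procedure mapped above (mini-batch processing, a mean-squared-error loss between expected and intermediate outputs, and adaptive moment estimation updates) follows the familiar pattern sketched below; the function names and hyperparameter values are placeholders, not the reference's actual values.

    import numpy as np

    def mse_loss(expected, intermediate):
        # MSE loss over a mini-batch (cf. Mdkarimi, equation 14)
        return np.mean((expected - intermediate) ** 2)

    def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        # One adaptive moment estimation (Adam) update of the parameters
        m = b1 * m + (1.0 - b1) * grad
        v = b2 * v + (1.0 - b2) * grad ** 2
        m_hat = m / (1.0 - b1 ** t)    # bias-corrected first moment
        v_hat = v / (1.0 - b2 ** t)    # bias-corrected second moment
        return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v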
Regarding claim 7, Mdkarimi, in view of Deilami, in further view of Kim, discloses the communication system of claim 6. Mdkarimi further discloses:
wherein said optimization algorithm is chosen in a group comprising an Adadelta optimization algorithm, an Adagrad optimization algorithm, an adaptive moment estimation algorithm, a Nesterov accelerated gradient algorithm, a Nesterov-accelerated adaptive moment estimation algorithm, a RMSprop algorithm, stochastic gradient optimization algorithms, and adaptive learning rate optimization algorithms (Mdkarimi, page 4, right column, first paragraph, lines 4-7 “the parameter vector of the DNN is updated according to the input-output vector pairs … by employing the adaptive moment estimation stochastic optimization algorithm”).
Regarding claim 8, Mdkarimi, in view of Deilami, in further view of Kim, discloses the communication system of claim 6. Mdkarimi further discloses:
wherein said loss function is chosen in a group comprising a mean square error function and an exponential log likelihood function (Mdkarimi, page 4, left column, paragraph below equation 12, lines 1-3 “obtain the parameter vector of the DNN by minimizing the following mean-squared error (MSE) loss function”).
Regarding claim 9, Mdkarimi, in view of Deilami, in further view of Kim, discloses the communication system of claim 6 and the computation unit. Mdkarimi further discloses:
wherein the computation unit is configured to determine initial model parameters for a first processing iteration from a randomly generated set of values (As per Claim interpretation of the computation unit is configured to determine initial model parameters above: Mdkarimi, page 2, left column, last paragraph, lines 2-3 “processing units” and page 4, right column, second paragraph, equation 15 and line 4 “θ0 … is random initial value” teaches the computation unit configured to determine initial model parameters from a randomly generated set of values).
Regarding claim 10, Mdkarimi, in view of Deilami, in further view of Kim, discloses the communication system of claim 6 and the computation unit. Mdkarimi further discloses:
wherein said computation unit is configured to determine said expected … lattice points from said radius value and lattice generator matrix by applying a list sphere decoding algorithm or a list Spherical-Bound Stack decoding algorithm (As per Claim interpretation of computation unit is configured to determine said expected numbers of lattice points above: Mdkarimi, page 2, left column, last paragraph, lines 2-3 “processing units” and page 3, left column, paragraph 3, lines 1-4 “Sphere decoding can speed up the process of finding the optimal solution by searching only the points of the skewed lattice that lie within a hypersphere of radius d centered at the vector y.” teaches the computation unit configured to determine expected lattice points from radius value and lattice generator matrix by applying a sphere decoding algorithm).
Mdkarimi teaches determining lattice points by applying a sphere decoding algorithm as stated above, but does NOT disclose that determination to be a determination of a number of lattice points.
Deilami teaches:
determining number of lattice points (Deilami, paragraph [0032], lines 1-6 “to implement lattice searching inside a sphere…enumerate the lattice points inside a sphere.” teaches predicting a number of lattice points).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mdkarimi to include determining a number of lattice points, based on the teachings of Deilami. The motivation for doing so would have been to improve the performance (Deilami, paragraph [0006], last two lines “this method can work in an iterative fashion to improve the performance").
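Examiner’s Note (illustration only): counting the lattice points that fall inside a sphere of radius r, as in the enumeration teaching of Deilami relied on above, can be done naively by searching a bounded integer box; the search bound box is an assumption for the sketch, and practical list sphere decoders instead prune the search using the upper triangular factor of the generator matrix.

    import numpy as np
    from itertools import product

    def count_lattice_points(M, r, box=3):
        # Count lattice points Mz (integer vectors z) with ||Mz|| <= r,
        # searching z in [-box, box]^n; illustrative brute force only.
        n = M.shape[1]
        return sum(1 for z in product(range(-box, box + 1), repeat=n)
                   if np.linalg.norm(M @ np.array(z)) <= r)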
Claims 11, 12, and 14 are substantially similar to claim 1, and thus are rejected on the same basis as claim 1.
Response to Arguments
Applicant's arguments filed 12/22/2025 have been fully considered with regard to the 35 U.S.C. 112 rejections, but they are not persuasive.
The applicant asserts on page 8 of the remarks “Additionally, Applicant directs the Examiner's attention to paragraphs [0097], [0098], and [0122] - [0140] of the printed publication consistent with the specification as filed. These paragraphs support compliance with the written description requirement. Accordingly, Applicant asserts that the claims are even more clearly definite and the claims even more clearly comply with the written description requirement and that the 35 USC 112 rejections have been overcome.” The examiner respectfully disagrees, as the cited paragraphs, and specifically ¶[0097] of the printed publication, do not provide adequate support for a “storage unit for storing” (see the claim interpretation and 112 rejections above).
Applicant's arguments filed 12/22/2025 have been fully considered with regards to the 35 U.S.C. 101 rejection, and they are persuasive. The rejections are withdrawn.
Applicant's arguments filed 12/22/2025 have been fully considered with regard to the 35 U.S.C. 102/103 rejections, but they are not persuasive.
The applicant asserts on page 12 of the remarks “Therefore, the input data of the learning algorithm of MDKARIMI are not determined as a function of the given radius and of the elements of the channel matrix… MDKARIMI does not use or suggest using a ML algorithm to predict the number of lattice points that fall inside a sphere defined by a given radius (unique radius), from input data determined as a function of the given radius and elements of the channel matrix.”
First, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., the channel matrix and input data determined as a function of the given radius and elements of the channel matrix) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Second, Mdkarimi discloses “the machine learning algorithm taking as input an input data vector comprising inputs determined from said radius value …” (Mdkarimi, Fig. 1, algorithm 1, and page 4, right column, first paragraph, lines 2-4 “For each input training vector x(i), the corresponding desired radius vector r(i) is obtained by employing SDIRS with a set of heuristic radiuses” and page 3, left column, final paragraph, lines 2-4 “q closest lattice points to vector y in the skewed lattice space is reconstructed via a DNN (as the DNN output) prior to sequential sphere decoding implementations” teaches applying a machine learning algorithm using the DNN in Fig. 1, where input data is derived from heuristic radius values and components of lattice generator matrix, and the DNN output is the predicted lattice points).
Finally, regarding predicting a number of lattice points, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In particular, Mdkarimi teaches determining predicted lattice points that fall inside said bounded region by applying a machine learning algorithm using a deep neural network associated with model parameters and an activation function, the model parameters being determined during a training phase, the machine learning algorithm taking as input an input data vector comprising inputs determined from said radius value and said components of lattice generator matrix and delivering as output said predicted lattice points (Fig. 1, equation 2, and page 2, right column, paragraph below equation 2, lines 1-3 “denotes the set of parameters … Al is the activation function”, and page 4, right column, first paragraph, lines 2-4 “For each input training vector x(i), the corresponding desired radius vector r(i) is obtained by employing SDIRS with a set of heuristic radiuses” and page 3, left column, final paragraph, lines 2-4 “q closest lattice points to vector y in the skewed lattice space is reconstructed via a DNN (as the DNN output) prior to sequential sphere decoding implementations”, and page 4, algorithm 1 teaches a processing unit configured to determine predicted lattice points by applying a machine learning algorithm using the DNN in Fig. 1 to input data derived from heuristic radius value and components of lattice generator matrix and delivering as output a predicted matrix). However, they do not explicitly disclose the prediction of lattice points to be predicting a number of lattice points. Deilami teaches predicting a number of lattice points (¶[0032], lines 1-6 “to implement lattice searching inside a sphere…enumerate the lattice points inside a sphere.” teaches predicting a number of lattice points). Mdkarimi and Deilami are analogous art because they are from the same field of endeavor, lattice point estimation and sphere decoding. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mdkarimi to include predicting a number of lattice points, based on the teachings of Deilami. The motivation for doing so would have been to improve the performance (Deilami, paragraph [0006], last two lines “this method can work in an iterative fashion to improve the performance").
The applicant asserts on pages 13-14 of the remarks “More precisely, at least the following underlined features of claim 1 are clearly not disclosed by MDKARIMI:… the computation unit is configured to perform a QR decomposition of said lattice generator matrix, which provides an upper triangular matrix, said computation unit being configured to determine said input data by performing a multiplication operation between each component of said upper triangular matrix and the inverse of said radius value.” and “Such features of claim 1 are also missing from KIM which refers to a sphere decoder, but fails to teach or suggest the use of a machine learning algorithm to determine a predicted number of lattice points that fall inside a bounded region, as claimed in claim 1.” The examiner respectfully disagrees, as the limitation incorporates previously rejected claim 2, which was taught by Mdkarimi, in view of Deilami, in further view of Kim. Furthermore, Kim does not need to disclose a machine learning algorithm to determine … predicted … lattice points that fall inside a bounded region, as that feature is taught by Mdkarimi. See the 103 rejection of claim 1 above for a detailed breakdown of how Mdkarimi, in view of Deilami, in further view of Kim teaches the QR decomposition as recited in claim 1. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references.
In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). The applicant is incorrect in stating that no motivation is given that would lead one of ordinary skill in the art to combine the references in the manner set forth in the Office action. Both the Office action mailed 06/26/2025 and the current rejection provide sufficient motivation to combine for each of the references used with Mdkarimi.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Loera et al. (“Details on Experiments (Counting and Estimating Lattice Points)”) teaches counting lattice points.
Wang et al. (“Deep Learning for Joint MIMO Detection and Channel Decoding”) teaches DNN and sphere decoding.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAIRA ZAHIN MAUNI whose telephone number is (703)756-5654. The examiner can normally be reached Monday - Friday, 9 am - 5 pm (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATT ELL can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.Z.M./ Examiner, Art Unit 2141
/MATTHEW ELL/ Supervisory Patent Examiner, Art Unit 2141