Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Amendments
This action is in response to amendments filed September 22, 2025, in which Claims 1, 3, 5-8, 10, 13, and 14 are amended and Claims 15-17 are new. Claims 1-17 are currently pending.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes a claim limitation that does not use the word “means,” but is nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such a claim limitation is: at least one evaluation component configured to update … in Claim 1 and its dependents. Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites a computer system comprising at least one evaluation component (interpreted under 35 U.S.C. 112(f) as including at least a processor), thus a machine, one of the four statutory categories of patentable subject matter. However, Claim 1 also recites to update the weight factors of at least a part of the synapses of the at least one neural network, including to update all weight factors of said at least one subset of synapses at the same time on the basis of correlated random components and to update the weight factors of a group of synapses not belonging to said at least one subset of synapses individually on the basis of uncorrelated random components, which is a mental process of determining new weight factors for synapses. Thus, the claim recites the abstract idea of updating weight factors based on correlated and uncorrelated components.
The claim does not recite any additional elements that integrate the abstract idea into a practical application because the additional elements consist of:
at least one neural network implemented on the computer system and configured to determine as output at least one result value from at least one input value provided as input, wherein there is a defined plurality of weight factors, each weight factor being assigned to a synapse of an artificial neuron of the neural network, and wherein at least one subset of the synapses of the at least one neural network is defined, which merely specifies the particular technological environment in which the abstract idea takes place (i.e., the information acted upon by the mental updating step), which by MPEP 2106.05(h) cannot integrate the abstract idea into a practical application;
at least one evaluation component configured to perform the updating, which is the implementation of the abstract idea on a computer, which by MPEP 2106.05(f)(2) cannot integrate the abstract idea into a practical application; and
that the updating is performed when an input signal is applied to one synapse belonging to said at least one subset of synapses, which merely specifies the particular technological environment in which the abstract idea takes place (i.e., when the step to update happens), which by MPEP 2106.05(h) cannot integrate the abstract idea into a practical application.
Thus, the claim is directed to the abstract idea of updating weight factors based on correlated and uncorrelated components.
Finally, the claim does not include any additional elements, alone or in combination, which could provide an inventive concept or significantly more than the abstract idea itself, because neither the specification of a particular technological environment (MPEP 2106.05(h)) nor the implementation on generic computer components (MPEP 2106.05(f)(2)) is significantly more, and there is no nexus between the additional elements which could do so in combination. Therefore, the claim is ineligible.
Claims 2-4, 7, and 16, each dependent upon Claim 1, only further recite details of the neural network about which the abstract idea of updating weight factors is performed, and thus merely specify the particular technological environment in which the abstract idea takes place, which by MPEP 2106.05(h) can neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Claims 5, 6, and 15, dependent upon Claim 1, recite further mental process steps within the abstract idea of updating the weight factors (to change the group assignment of the at least one defined subset of synapses between two computational steps, and to create the correlated random components out of uncorrelated random components by using a predetermined operation or by creating weighted sums of the uncorrelated random components), but no new additional elements, and thus no additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself.
Claim 8 recites a method, thus a process, one of the four statutory categories of patentable subject matter. However, Claim 8 further recites steps of determining at least one result value from at least one input value, defining at least one subset of synapses of the at least one neural network, updating during a computational step all weight factors of the at least one subset of synapses at the same time on the basis of correlated random components, and updating the weight factors of a group of synapses not belonging to the at least one subset of synapses individually on the basis of uncorrelated random components, each of which is a mental process capable of being performed in the human mind (e.g., determining a result from an input value and determining new weight factors for synapses). Thus, the claim recites the abstract ideas of determining a result from an input value and updating weight factors based on correlated and uncorrelated components.
The claim does not recite any additional elements that integrate the abstract idea into a practical application because the additional elements consist of:
a computer system on which to perform the determining and updating, which is the implementation of the abstract idea on a computer, which by MPEP 2106.05(f)(2) cannot integrate the abstract idea into a practical application;
using the implemented at least one neural network, wherein there is defined a plurality of weight factors, each weight factor being assigned to a synapse of an artificial neuron of the at least one neural network, to perform the determining, which is using a computer or other machinery as a tool to perform abstract idea steps, which by MPEP 2106.05(f)(2) cannot integrate the abstract idea into a practical application; and
that the updating is performed when an input signal is applied to [one of the entangled synapses]/[a synapse belonging to said group of synapses], which merely specifies the particular technological environment in which the abstract idea takes place (i.e., when the step to update happens), which by MPEP 2106.05(h) cannot integrate the abstract idea into a practical application.
Thus, the claim is directed to the abstract idea of updating weight factors based on correlated and uncorrelated components.
Finally, the claim does not include any additional elements, alone or in combination, which could provide an inventive concept or significantly more than the abstract idea itself, because neither the specification of a particular technological environment (MPEP 2106.05(h)) nor the implementation on generic computer components (MPEP 2106.05(f)(2)) is significantly more, and there is no nexus between the additional elements which could do so in combination. Therefore, the claim is ineligible.
Claims 9, 10, 13, and 17, dependent upon Claim 8, only recite further mental process steps (Claim 9: output values determined on the basis of input signals by means of the weight factors, functions, etc.; Claim 10: the assignment is changed; Claim 13: correlated random components created out of uncorrelated random components by an operation; Claim 17: creating weighted sums of the uncorrelated random components), but no additional elements, and thus no additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself.
Claims 11 and 12, dependent upon Claim 8, merely specify the particular technological environment in which the abstract idea takes place (i.e., the weight factors were randomly initialized; the updating is done by computational units of the computer system simultaneously), which by MPEP 2106.05(h) can neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Claim 14 recites a non-transitory computer readable medium having stored thereon a computer program to carry out the method of Claim 8, and is thus rejected for reasons set forth in the rejection of Claim 8, as performing an abstract idea using generic computer components cannot integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself (by MPEP 2106.05(f)).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4-7, 8-14, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ng, “CS294A Lecture Notes: Sparse Autoencoder,” in view of El-Khamy, US PG Pub 2019/0096038.
Regarding Claim 1, Ng teaches a computer system (Ng, pg. 16, “the programming assignment” indicates that the autoencoding neural network learning system is implemented on a computer) comprising: at least one neural network implemented on the computer system and configured to determine as output at least one result value from at least one input value provided as input (Ng, pg. 13, see figure of autoencoder), wherein there is defined a plurality of weight factors, each weight factor being assigned to a synapse of an artificial neuron of the neural network, and wherein at least one subset of synapses of the at least one neural network is defined (Ng, pg. 13, each edge is a synapse with an associated weight factor, see Eqs. on pg. 4; each layer is a defined subset) and at least one evaluation component configured to update the weight factors of at least a part of the synapses of the at least one neural network, the at least one evaluation component being configured to update all weight factors of said at least one subset of synapses at the same time during a computational step … when an input signal is applied to one synapse belonging to said at least one subset of synapses (Ng, pg. 8, Step 3 of backpropagation, where backpropagation updates each layer/subset of synapses at the same time/step when the backpropagation signal is computed/applied to the neurons, see pg. 10, Step 3), wherein the at least one evaluation component is further configured to update the weight factors of a group of synapses not belonging to said at least one subset of synapses (i.e., any other layer) individually … when an input signal is applied to a synapse belonging to said group of synapses (Ng, pg. 10, Step 3, all the weights, including weights of different layers, are updated according to the backpropagation-computed gradients).
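For clarity of the record, the layer-wise simultaneous update cited above can be illustrated as follows. This is a minimal NumPy sketch of the update pattern of Ng, pg. 10, Step 3, under assumed illustrative values; the names (W1, W2, grad_W1, grad_W2, alpha) do not appear in Ng, and the gradients are random placeholders standing in for the backpropagation outputs of Ng, pgs. 8-10:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative two-layer network: one weight matrix per layer,
    # i.e., one defined subset of synapses per layer
    W1 = rng.standard_normal((3, 3))   # layer-1 weight factors
    W2 = rng.standard_normal((1, 3))   # layer-2 weight factors

    # Placeholder gradients; in Ng these come from backpropagation
    grad_W1 = rng.standard_normal(W1.shape)
    grad_W2 = rng.standard_normal(W2.shape)

    alpha = 0.1  # learning rate

    # Ng, pg. 10, Step 3: each layer's weights are updated in a single
    # vectorized assignment, so all weight factors of that layer (subset
    # of synapses) are updated at the same time within the computational
    # step, while weights of other layers are updated by their own,
    # separate assignments
    W1 -= alpha * grad_W1
    W2 -= alpha * grad_W2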
Ng is silent regarding whether the updating is performed on the basis of correlated random components and on the basis of uncorrelated random components. However, Ng clearly teaches that all the weights are updated on the basis of 1) the input signal and 2) the activations of intermediate layers (Ng, pg. 8, Step 1, “perform a feedforward pass, computing the activations for layers”). El-Khamy teaches that the input of an autoencoder can include uncorrelated random components (El-Khamy, [0028], “inputting a noisy version of the denoised image into a noisy data sparse denoising autoencoder,” where, for the noisy version, [0068], “in which y_ij, the (i,j)th generated image pixel, depends only on x_ij”; that is, the different noises on each different input pixel are independent and uncorrelated, see Equation 1). It would have been obvious to one of ordinary skill in the art before the effective filing date to use an autoencoder such as that described in Ng to denoise images, as does El-Khamy. The motivation to do so is that “when a digital image is taken under relative low light conditions, for example, at dusk or nighttime, noise is often present … The image noise is undesirable” (El-Khamy, [0003]), and El-Khamy solves this problem using sparse autoencoders (El-Khamy, [0028], [0036]). Having the input image include uncorrelated random components causes all weight factors to be updated on the basis of uncorrelated random components and on the basis of correlated random components since, after processing through the first layer of the neural network, the independent noise of the input becomes correlated summation and activation values (Ng, pg. 4, Eqs. (2)-(5)), and backpropagation is performed on the basis of the activation values.
[media_image1.png (greyscale): annotated reproduction of Ng, pg. 4, Eqs. (2)-(5), with the first-layer weighted sums highlighted]
Specifically, the inputs x_1, x_2, and x_3 to the neural network are uncorrelated random components in the combination of Ng/El-Khamy, and the different highlighted weighted sums are correlated random components; further, the activations a_1^(2), a_2^(2), and a_3^(2) are also correlated random components.
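The point that independent input noise becomes correlated after the first layer can be verified numerically. The following is a minimal sketch under assumed illustrative values (the weights, sample count, and names are not taken from Ng or El-Khamy): it draws i.i.d. noise for three input pixels, propagates it through one dense sigmoid layer of the form of Ng, pg. 4, Eqs. (2)-(5), and prints the empirical correlation matrices:

    import numpy as np

    rng = np.random.default_rng(1)

    W = rng.standard_normal((3, 3))   # fixed first-layer weight factors
    b = rng.standard_normal(3)        # fixed biases

    # 10,000 draws of three i.i.d. (uncorrelated) input noise components,
    # as in El-Khamy's per-pixel independent noise model
    X = rng.standard_normal((10000, 3))

    Z = X @ W.T + b                   # first-layer weighted sums
    A = 1.0 / (1.0 + np.exp(-Z))      # sigmoid activations

    print(np.round(np.corrcoef(X, rowvar=False), 2))  # ~identity: uncorrelated
    print(np.round(np.corrcoef(Z, rowvar=False), 2))  # nonzero off-diagonals: correlated
    print(np.round(np.corrcoef(A, rowvar=False), 2))  # activations also correlated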
Regarding Claim 4, the Ng/El-Khamy combination of Claim 1 teaches the computer system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Ng further teaches wherein for each artificial neuron of the at least one neural network an output value is determinable on the basis of input signals applied to synapses of the artificial neuron by means of the weight factors which are assigned to the synapses, an integration function of the neuron, and [an activation function] of the artificial neuron, which output value forms an input signal for at least one synapse of a different artificial neuron or forms a component of the result value to be outputted by the at least one neural network, wherein the at least one result value can be computed by the at least one neural network on the basis of the at least one input value applied to a defined group of synapses by progressive computation of the output values of the artificial neurons (Ng, pg. 4, Eqs. (2)-(5), and pg. 13, figure of autoencoder, including weight factors, where matrix multiplication integrates all of the input values into a single vector for that neuron).
Ng teaches a sigmoid activation function (Ng, pg. 2, last paragraph) rather than a threshold function. However, El-Khamy uses a threshold function for their activation function (El-Khamy, [0054], “the denoising network …[may include] ReLU,” where a ReLU has a threshold of zero for activating the output). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, when using an autoencoder for denoising images as El-Khamy does, to use the activation function of El-Khamy in that autoencoder (Ng, pg. 2, “we will choose f(·)” indicates that the choice of activation function is application-dependent).
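For the record, the two activation functions at issue have the standard forms (well-known definitions, not quoted from the references):

\[
\sigma(z) = \frac{1}{1+e^{-z}} \quad \text{(Ng's sigmoid)}, \qquad \operatorname{ReLU}(z) = \max(0,\, z) \quad \text{(threshold at } z = 0\text{)},
\]

where the ReLU outputs a nonzero activation only when its input exceeds the threshold of zero.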
Regarding Claim 5, the Ng/El-Khamy combination of Claim 1 teaches the computer system of Claim 1 (and thus the rejection of Claim 1 is incorporated). Ng further teaches wherein the computer system is configured to change a group assignment of the at least one defined subset of synapses between two computational steps (Ng, pg. 9, Step 3, backpropagation works on each subset/layer sequentially, and Step 3 changes the currently assigned subset/layer being worked on, i.e., the current layer assignment l).
Regarding Claim 6, the Ng/El-Khamy combination of Claim 1 teaches the computer system of Claim 1 (and thus the rejection of Claim 1 is incorporated). The combination has already been shown to teach wherein the computer system is configured to create the correlated random components out of uncorrelated random components by using a predetermined operation (see the rejection of Claim 1, where after processing through the first layer of the neural network, the independent noise of the input becomes correlated summation and activation values (Ng, pg. 4, Eqs. (2)-(5))).
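This point can also be stated in closed form. As a sketch using standard covariance algebra (not quoted from either reference): if the input components are i.i.d. with variance \(\sigma^2\), the first-layer affine map \(z^{(2)} = W^{(1)}x + b^{(1)}\) of Ng, pg. 4, Eqs. (2)-(4), gives

\[
\operatorname{Cov}\bigl(z^{(2)}\bigr) = W^{(1)}\,\operatorname{Cov}(x)\,W^{(1)\top} = \sigma^{2}\,W^{(1)}W^{(1)\top},
\]

which is in general non-diagonal, so the components of \(z^{(2)}\), and hence the activations \(f(z^{(2)})\), are mutually correlated; the affine map is the predetermined operation.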
Regarding Claim 7, the Ng/El-Khamy combination of Claim 1 teaches the computer system of Claim 1 (and thus the rejection of Claim 1 is incorporated). The combination has not yet been shown to teach, but El-Khamy teaches, wherein at least two neural networks which are working in parallel at a given time are implemented on the computer system (El-Khamy, Abstract, “An image denoising neural network training architecture includes an image denoising neural network and a clean data neural network”) and at least some of the artificial neurons of a given neural network are crosslinked with artificial neurons of a segment of another neural network by having axons of one neural network reach across neural networks to send signals to synapses of the other neural network (El-Khamy, Abstract, “An image denoising neural network training architecture includes an image denoising neural network and a clean data neural network, and the image denoising neural network and clean data neural network share information between each other” see Fig. 1).
Regarding Claim 8, Ng teaches a method for operating a computer system on which at least one neural network is implemented (Ng, pg. 16, “the programming assignment” indicates that the autoencoding neural network learning system is implemented on a computer), wherein the at least one neural network determines as output at least one result value from at least one input value provided as input (Ng, pg. 13, see figure of autoencoder), the method comprising: determining as output at least one result value from at least one input value provided as input using the implemented at least one neural network (Ng, pg. 13, see figure of autoencoder), wherein there is defined a plurality of weight factors, each weight factor being assigned to a synapse of an artificial neuron of the neural network (Ng, pg. 13, each edge is a synapse with an associated weight factor, see Eqs. on pg. 4); defining at least one subset of synapses of the at least one neural network (Ng, pg. 13, each layer is a defined subset of synapses); updating during a computational step all weight factors of said at least one subset of synapses at the same time … when an input signal is applied to one synapse belonging to the at least one subset of synapses (Ng, pg. 8, Step 3 of backpropagation, where backpropagation updates each layer/subset of synapses at the same time/step when the backpropagation signal is computed/applied to the neurons, see pg. 10, Step 3); and updating the weight factors of a group of synapses not belonging to said at least one subset of synapses (i.e., any other layer) individually … when an input signal is applied to a synapse belonging to said group of synapses (Ng, pg. 10, Step 3, all the weights, including weights of different layers, are updated according to the backpropagation-computed gradients).
Ng is silent regarding whether the updating is performed on the basis of correlated random components and on the basis of uncorrelated random components. However, Ng clearly teaches that all the weights are updated on the basis of 1) the input signal and 2) the activations of intermediate layers (Ng, pg. 8, Step 1, “perform a feedforward pass, computing the activations for layers”). El-Khamy teaches that the input of an autoencoder can include uncorrelated random components (El-Khamy, [0028], “inputting a noisy version of the denoised image into a noisy data sparse denoising autoencoder,” where, for the noisy version, [0068], “in which y_ij, the (i,j)th generated image pixel, depends only on x_ij”; that is, the different noises on each different input pixel are independent and uncorrelated, see Equation 1). It would have been obvious to one of ordinary skill in the art before the effective filing date to use an autoencoder such as that described in Ng to denoise images, as does El-Khamy. The motivation to do so is that “when a digital image is taken under relative low light conditions, for example, at dusk or nighttime, noise is often present … The image noise is undesirable” (El-Khamy, [0003]), and El-Khamy solves this problem using sparse autoencoders (El-Khamy, [0028], [0036]). Having the input image include uncorrelated random components causes all weight factors to be updated on the basis of uncorrelated random components and on the basis of correlated random components since, after processing through the first layer of the neural network, the independent noise of the input becomes correlated summation and activation values (Ng, pg. 4, Eqs. (2)-(5)), and backpropagation is performed on the basis of the activation values.
[media_image1.png (greyscale): annotated reproduction of Ng, pg. 4, Eqs. (2)-(5), with the first-layer weighted sums highlighted]
Specifically, the inputs x_1, x_2, and x_3 to the neural network are uncorrelated random components in the combination of Ng/El-Khamy, and the different highlighted weighted sums are correlated random components; further, the activations a_1^(2), a_2^(2), and a_3^(2) are also correlated random components.
Regarding Claim 9, the Ng/El-Khamy combination of Claim 8 teaches the method of Claim 8 (and thus the rejection of Claim 8 is incorporated). Ng further teaches wherein for each artificial neuron of the at least one neural network an output value is determinable on the basis of input signals applied to synapses of the artificial neuron by means of the weight factors which are assigned to the synapses, an integration function of the neuron, and [an activation function] of the artificial neuron, which output value forms an input signal for at least one synapse of a different artificial neuron or forms a component of the result value to be outputted by the at least one neural network, wherein the at least one result value can be computed by the at least one neural network on the basis of the at least one input value applied to a defined group of synapses by progressive computation of the output values of the artificial neurons (Ng, pg. 4, Eqs. (2)-(5), and pg. 13, figure of autoencoder, including weight factors, where matrix multiplication integrates all of the input values into a single vector for that neuron).
Ng teaches a sigmoid activation function (Ng, pg. 2, last paragraph) rather than a threshold function. However, El-Khamy uses a threshold function for their activation function (El-Khamy, [0054], “the denoising network …[may include] ReLU,” where a ReLU has a threshold of zero for activating the output). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, when using an autoencoder for denoising images as El-Khamy does, to use the activation function of El-Khamy in that autoencoder (Ng, pg. 2, “we will choose f(·)” indicates that the choice of activation function is application-dependent).
Regarding Claim 10, the Ng/El-Khamy combination of Claim 8 teaches the method of Claim 8 (and thus the rejection of Claim 8 is incorporated). Ng further teaches wherein an assignment of the at least one defined subset of synapses is changed at least once between two computational steps (Ng, pg. 9, Step 3, backpropagation works on each subset/layer sequentially, and Step 3 changes the currently assigned subset/layer being worked on, i.e., the current layer assignment l).
Regarding Claim 11, the Ng/El-Khamy combination of Claim 8 teaches the method of Claim 8 (and thus the rejection of Claim 8 is incorporated). Claim 11 recites a contingent limitation (conditioned on weight factors having been assigned a same random component during a randomized initialization), and as no weight factors are assigned a same random component during a randomized initialization in the Ng/El-Khamy combination, the limitation is trivially met. See MPEP 2111.04(II).
Regarding Claim 12, the Ng/El-Khamy combination of Claim 8 teaches the method of Claim 8 (and thus the rejection of Claim 8 is incorporated). Ng further teaches wherein the updating of the subsets of synapses is done by a plurality of computational units of the system concurrently (Ng, pg. 10, Step 3, the weights are updated in vector notation, i.e. concurrently, using multiple memory locations, i.e. by a plurality of computational units).
Regarding Claim 13, the Ng/El-Khamy combination of Claim 8 teaches the method of Claim 8 (and thus the rejection of Claim 8 is incorporated). The combination has already been shown to teach wherein the computer system is configured to create the correlated random components out of uncorrelated random components by using a predetermined operation (see the rejection of Claim 1, where after processing through the first layer of the neural network, the independent noise of the input becomes correlated summation and activation values (Ng, pg. 4, Eqs. (2)-(5))).
Claim 14 recites a non-transitory computer readable recording medium having stored thereon a program to perform the method of Claim 8. As Ng has been shown to program a computer to perform their method (Ng, pg. 16, “the programming assignment” indicates that the autoencoding neural network learning system is implemented on a computer), Claim 14 is rejected for reasons set forth in the rejection of Claim 8.
Regarding Claim 15, the Ng/El-Khamy combination of Claim 1 teaches the computer system of Claim 1 (and thus the rejection of Claim 1 is incorporated). The combination has already been shown to teach wherein the computer system is configured to create correlated random components out of the uncorrelated random components by creating weighted sums of the uncorrelated random components (see the rejection of Claim 1, where after processing through the first layer of the neural network, the independent noise of the input becomes correlated summation and activation values (Ng, pg. 4, Eqs. (2)-(5))). Specifically, in the Ng/El-Khamy combination, in Ng, Eqs. (2)-(4), x_1, x_2, and x_3 are uncorrelated random components corresponding to the inputs of the autoencoder, as they are generated independently in El-Khamy, [0068], Equation 1. Ng, Eqs. (2)-(4) thus disclose weighted sums of the uncorrelated random components:
[media_image1.png (greyscale): annotated reproduction of Ng, pg. 4, Eqs. (2)-(5), with the first-layer weighted sums highlighted]
where these three indicated weighted sums are correlated random components created as weighted sums of the uncorrelated random components.
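As a sketch of the cited weighted-sum structure, written in standard feedforward notation (the exact typography of Ng's Eqs. (2)-(4) may differ):

\[
z_i^{(2)} = \sum_{j=1}^{3} W_{ij}^{(1)} x_j + b_i^{(1)}, \qquad i = 1, 2, 3,
\]

i.e., each correlated random component is literally a weighted sum of the uncorrelated random components \(x_1, x_2, x_3\).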
Regarding Claim 16, the Ng/El-Khamy combination of Claim 1 teaches the computer system of Claim 1 (and thus the rejection of Claim 1 is incorporated). The combination has not yet been shown to teach, but El-Khamy teaches, wherein at least two neural networks which are working in parallel at a given time are implemented on the computer system (El-Khamy, Abstract, “An image denoising neural network training architecture includes an image denoising neural network and a clean data neural network”) and at least some of the artificial neurons of a given neural network are crosslinked with artificial neurons of a segment of another neural network by having axons of one neural network reach across neural networks to send signals to synapses of the other neural network (El-Khamy, Abstract, “An image denoising neural network training architecture includes an image denoising neural network and a clean data neural network, and the image denoising neural network and clean data neural network share information between each other” see Fig. 1) wherein for each of the segments of another neural network which is to be linked to, there is provided a separate dendrite in an artificial neuron of the neural network with as many synapses as there are artificial neurons in the segment of the other network (El-Khamy, Fig. 1, where the two sharing networks have the same architecture, see [0056], and thus have the same dendrite/synapse structure as each other, i.e. as many synapses as there are artificial neurons in the segment of the other neural network, see [0098] where the neuron outputs from the clean network are provided to the inputs of the noisy network).
Regarding Claim 17, the Ng/El-Khamy combination of Claim 8 teaches the method of Claim 8 (and thus the rejection of Claim 8 is incorporated). The combination has already been shown to teach wherein the correlated random components are created out of the uncorrelated random components by creating weighted sums of the uncorrelated random components (see the rejection of Claim 1, where after processing through the first layer of the neural network, the independent noise of the input becomes correlated summation and activation values (Ng, pg. 4, Eqs. (2)-(5))). Specifically, in the Ng/El-Khamy combination, in Ng, Eqs. (2)-(4), x_1, x_2, and x_3 are uncorrelated random components corresponding to the inputs of the autoencoder, as they are generated independently in El-Khamy, [0068], Equation 1. Ng, Eqs. (2)-(4) thus disclose weighted sums of the uncorrelated random components:
[media_image1.png (greyscale): annotated reproduction of Ng, pg. 4, Eqs. (2)-(5), with the first-layer weighted sums highlighted]
where these three indicated weighted sums are correlated random components created as weighted sums of the uncorrelated random components, as set forth in the rejection of Claim 15.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Ng, in view of El-Khamy, and further in view of Dean et al., “Large Scale Distributed Deep Networks.”
Regarding Claim 2, the Ng/El-Khamy combination of Claim 1 teaches the computer system of Claim 1 (and thus the rejection of Claim 1 is incorporated). The Ng/El-Khamy combination does not teach, but Dean teaches, wherein the computer system comprises a plurality of computational units which are operated in parallel and computational units of the plurality of computational units are assigned to defined groups of artificial neurons of the at least one neural network (Dean, pg. 3, Fig. 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the DistBelief system of Dean (i.e., parallel learning of neural networks on multiple machines, dividing the neural network’s neurons up between the machines) to train the denoising autoencoder architecture of the Ng/El-Khamy invention. The motivation to do so is “to facilitate the training of very large networks” (Dean, pg. 3, 1st paragraph), such as the denoising network of the Ng/El-Khamy combination (El-Khamy, Fig. 1, for example).
Regarding Claim 3, the Ng/El-Khamy/Dean combination of Claim 2 teaches the computer system of Claim 2 (and thus the rejection of Claim 2 is incorporated). The combination has already been shown to teach wherein at least two different computational units of the plurality of computational units are assigned to at least two different subsets of said at least one subset of synapses (Dean, pg. 3, Fig. 1).
Response to Arguments
Applicant’s arguments filed September 22, 2025 have been fully considered, but are not fully persuasive.
Applicant’s arguments regarding the various 35 U.S.C. 112(b) indefiniteness rejections of the previous office action have been fully considered, and are persuasive. The rejections have been withdrawn.
Applicant’s arguments regarding the 35 U.S.C. 101 rejections of the previous office action have been fully considered, but are unpersuasive.
Applicant first states that “the presently claimed invention requires that the weight factors of entangled synapses (i.e. synapses belonging to said at least one subset of synapses) are updated on the basis of interdependent (i.e. correlated) random data and that the weight factors of unentangled synapses (i.e. synapses belonging to said group of synapses not belonging to said at least one subset of synapses) are updated on the basis of mutually independent (i.e. uncorrelated) random data.” However, applicant next implies that this claimed feature “substantially enhances the trainability of the neural network by reducing the training period and amount of training data,” which does not follow from the recited claim language. For example, as in the cited prior art, all synapses may be updated on the basis of both uncorrelated random components and correlated random components. This procedure clearly falls within the scope of the claims and does not result in any increase in the trainability of a neural network over standard practices. Improvements in technology must be reflected in the claim language, which the currently claimed invention fails to do.
Applicant continues to discuss the detailed workings of the disclosed invention in an attempt to demonstrate significantly more than the abstract idea itself. However, as none of these features are required by the claim language, this discussion does not bear on the patentability of the claimed invention.
The actual claimed limitations, as noted in the subject matter eligibility rejection, only require updating on the basis of the correlated and uncorrelated components, a step which, on its own, does not provide the improvements in training efficiency or accuracy asserted by the applicant.
Applicant’s arguments regarding the 35 U.S.C. 103 rejections of the previous office action have been fully considered, but are unpersuasive.
Applicant asserts that the weight factors of the Ng/El-Khamy combination are not updated based upon correlated and uncorrelated random factors. Applicant argues that “although an autoencoder has a stochastic input, the process to obtain a denoised output is not stochastic.” Without arguing the merits of this statement, it is noted that whether “the process to ob