DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-14 are pending and have been examined.
Information Disclosure Statement
The information disclosure statements submitted by Applicant on 9/12/2022, 4/13/2023, 10/31/2023, and 8/28/2024 have been considered.
Claim Rejections - 35 USC § 112
Claim 5 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “lower probability” in claim 5 is a relative term which renders the claim indefinite. The term “lower” is not defined by the claim nor in Applicant’s specification, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Therefore, the claim is rejected under 35 U.S.C. 112(b).
The term “certain ratio” in claim 5 is a relative term which renders the claim indefinite. The term “certain” is not defined by the claim nor in Applicant’s specification, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Therefore, the claim is rejected under 35 U.S.C. 112(b).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea and/or mathematical concepts) without significantly more.
Regarding claim 1,
Step 1: Claim 1 is directed towards a non-transitory computer-readable storage medium.
Step 2A, Prong 1: Claim 1 recites the following limitations:
converting, into a first probability distribution, a probability distribution of a latent variable that is generated by the trained variational autoencoder according to the input based on a magnitude of a standard deviation output from the encoder (i.e., this limitation recites mathematical computations. See concepts described by Applicant in the Specification in Paragraphs [0056]-[0060] in order to convert a probability distribution of a latent variable into a first probability distribution);
converting the first probability distribution into a second probability distribution based on an output error of the decoder regarding the input data (i.e., this limitation recites mathematical computations. See concepts described by Applicant in the Specification in Paragraphs [0056]-[0060] in order to convert the first probability distribution above into a second probability distribution based on an error calculation);
Hence, the claim recites an abstract idea.
Step 2A, Prong 2: The claim recites the following additional elements: “a non-transitory computer-readable storage medium”, “a latent variable that is generated by the trained variational autoencoder according to the input based on a magnitude of a standard deviation output from the encoder”, and “a second probability distribution based on an output error of the decoder regarding the input data”. These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception (i.e., the abstract ideas) using generic computer components (i.e., a variational autoencoder with a decoder and an encoder) (see MPEP 2106.05(f)). Furthermore, the claim recites the additional elements of “inputting an input data into a trained variational autoencoder that includes an encoder and a decoder” and “outputting the second probability distribution as an estimated value of a probability distribution of the input data”. These “inputting…” and “outputting…” steps are understood to be insignificant extra-solution activity consisting of mere data transmission over a network (see MPEP 2106.05(g)). Hence, the claim does not recite additional elements that integrate the judicial exception into a practical application.
Step 2B: Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of “a non-transitory computer-readable storage medium”, “a latent variable that is generated by the trained variational autoencoder according to the input based on a magnitude of a standard deviation output from the encoder”, and “a second probability distribution based on an output error of the decoder regarding the input data” amount to no more than mere instructions to apply the exception (i.e., the abstract idea) using generic computer components (i.e., a variational autoencoder with a decoder and an encoder) (see MPEP 2106.05(f)). Furthermore, the “inputting” and “outputting” steps were considered to be insignificant extra-solution activity; these steps must therefore be re-evaluated under Step 2B to determine whether they are more than what is considered well-understood, routine, and conventional activity in the field. The court decisions cited in MPEP 2106.05(d)(II) have concluded that mere data transmission over a network is a well-understood, routine, and conventional activity in the field (see MPEP 2106.05(d)(II) i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)). Hence, the claim does not recite additional elements that amount to significantly more than the judicial exception, nor does it recite an inventive concept, and is rejected.
Regarding claim 2,
Step 2A, Prong 1: Claim 2 recites an abstract idea as inherited from claim 1 above. Claim 2 recites the additional limitation:
wherein the converting the probability distribution of the latent variable into the first probability distribution includes converting the probability distribution of the latent variable into the first probability distribution according to conversion processing from the latent variable into principal component coordinates, based on the magnitude of the standard deviation (i.e., this limitation recites mathematical computations. See concepts described by Applicant in the Specification in Paragraphs [0046]-[0060] in order to convert the probability distribution of the latent variable into the first probability distribution according to conversion processing from the latent variable into principal component coordinates, based on the magnitude of the standard deviation).
Hence, the claim further recites an abstract idea.
Step 2A, Prong 2: Claim 2’s recitation of a non-transitory computer-readable storage medium and a variational autoencoder with a decoder and an encoder to perform the limitation above amounts to no more than mere instructions to apply the judicial exception using generic computer components (see MPEP 2106.05(f)). Hence, the claim does not recite additional elements that integrate the judicial exception into a practical application.
Step 2B: Claim 2 does not recite further additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed in the rejection of claim 1 above, the additional elements of “a non-transitory computer-readable storage medium” and a variational autoencoder with a decoder and an encoder to perform the limitation above amount to no more than mere instructions to apply the judicial exception using generic computer components (see MPEP 2106.05(f)). Hence, the claim does not recite additional elements that amount to significantly more than the judicial exception, nor does it recite an inventive concept, and is rejected.
Regarding claim 3,
Step 2A, Prong 1: Claim 3 recites an abstract idea as inherited from claims 1 and 2 above. Claim 3 recites the additional limitation:
wherein the conversion processing includes:
converting the latent variable into the principal component coordinates by using the standard deviation, the mean, and the change rate (i.e., this limitation recites mathematical computations. See concepts described by Applicant in the Specification in Paragraphs [0046]-[0060] in order to convert the latent variable into principal components.)
Hence, the claim further recites an abstract idea.
Step 2A, Prong 2: Claim 3 recites the additional elements of “acquiring the standard deviation and a mean that are distribution parameters of the probability distribution of the latent variable from the encoder”, and “acquiring a change rate of a scale between the principal component coordinates and the latent variable, by using the magnitude of the standard deviation” [output from the encoder]. These additional elements have been understood as insignificant extra-solution activity consisting of mere data transmission. (see MPEP 2106.05(g)). Hence, the claim does not recite further additional elements that integrate the judicial exception (i.e., the abstract idea) into a practical application.
Step 2B: Claim 3 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of “acquiring the standard deviation and a mean that are distribution parameters of the probability distribution of the latent variable from the encoder”, and “acquiring a change rate of a scale between the principal component coordinates and the latent variable, by using the magnitude of the standard deviation” [output from the encoder] have been understood as insignificant extra-solution activity consisting of mere data transmission. Hence, these additional elements are re-evaluated under Step 2B to see if they are more than what is considered by the courts as well-understood, routine, and conventional activity in the field. The court decisions cited in MPEP 2106.05(d)(II) have concluded that mere data transmission over a network is a well-understood, routine, and conventional activity in the field. (see MPEP 2106.05(d)(II) i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)). Hence, the claim does not recite additional elements that amount to significantly more than the judicial exception nor does it recite an inventive concept, and is rejected.
Regarding claim 4,
Step 2A, Prong 1: Claim 4 recites an abstract idea as inherited from claims 1, 2, and 3 above. Claim 4 recites the additional limitation:
wherein converting the first probability distribution into the second probability distribution includes:
setting a probability distribution other than a principal component in the first probability distribution as a constant (i.e., this limitation recites mathematical computations. See concepts described by Applicant in the Specification in Paragraphs [0046]-[0060] in order to set a probability distribution other than a principal component in the first probability distribution as a constant);
converting the first probability distribution into the second probability distribution by using the output error according to a normal distribution to which the scale of the input data is set (i.e., this limitation recites mathematical computations. See concepts described by Applicant in the Specification in Paragraphs [0046]-[0060] in order to convert the first probability distribution into the second probability distribution by using the calculated error according to a normal distribution to which the scale of the input data is set)
Hence, the claim further recites an abstract idea.
Step 2A, Prong 2: Claim 4 does not recite any additional elements that integrate the judicial exception (i.e., the abstract ideas) into a practical application.
Step 2B: Claim 4 does not include additional elements that are sufficient to amount to significantly more than the judicial exception nor does it recite an inventive concept. Therefore, the claim is rejected.
Regarding claim 5,
Step 2A, Prong 1: Claim 5 recites an abstract idea as inherited from claim 1 above. Claim 5 further recites the limitation:
the process further comprising detecting data, with a lower probability, that occupies a certain ratio as anomaly data based on the second probability distribution (i.e., a person can, either mentally or with the aid of pen and paper, make observations and judgments about data to determine that such data has a lower probability and occupies a certain ratio as anomaly data based on the second probability distribution (e.g., a certain ratio assuming that the noise mixed into the data is known, according to Paragraph [0047] of Applicant’s specification))
Hence, the claim further recites an abstract idea.
Step 2A, Prong 2: Claim 5 does not recite any additional elements that integrate the judicial exception (i.e., the abstract idea) into a practical application.
Step 2B: Claim 5 does not include additional elements that are sufficient to amount to significantly more than the judicial exception nor does it recite an inventive concept. Therefore, the claim is rejected.
Regarding claim 6,
Step 2A, Prong 1: Claim 6 recites an abstract idea as inherited from claim 1. Claim 6 further recites the limitation:
the process further comprising detecting data with a probability equal to or less than a threshold as anomaly data based on the second probability distribution (i.e., a person can either mentally or with the aid of pen and paper make observations and judgements about data to determine that such data has a probability equal to or less than a threshold as anomaly data based on the second probability distribution).
Hence, the claim further recites an abstract idea.
Step 2A, Prong 2: Claim 6 does not recite any additional elements that integrate the judicial exception (i.e., the abstract idea) into a practical application.
Step 2B: Claim 6 does not include additional elements that are sufficient to amount to significantly more than the judicial exception nor does it recite an inventive concept. Therefore, the claim is rejected.
Regarding claim 7,
Claim 7 (directed towards a method) recites the same and/or analogous limitations as claim 1 above. Therefore the claim is rejected under the same rationale stated for claim 1.
Regarding claim 8,
Claim 8 (directed towards a method) recites the same and/or analogous limitations as claim 2 above. Therefore the claim is rejected under the same rationale stated for claim 2.
Regarding claim 9,
Claim 9 (directed towards a method) recites the same and/or analogous limitations as claim 3 above. Therefore the claim is rejected under the same rationale stated for claim 3.
Regarding claim 10,
Claim 10 (directed towards a method) recites the same and/or analogous limitations as claim 4 above. Therefore the claim is rejected under the same rationale stated for claim 4.
Regarding claim 11,
Step 1: Claim 11 is directed towards an apparatus.
Step 2A, Prong 1: Claim 11 recites the same and/or analogous limitations as claim 1 above. Therefore, claim 11 is rejected under the same rationale as claim 1.
Step 2A, Prong 2: Claim 11 further recites the additional elements of “one or more memories” and “one or more processors” configured to execute the same limitations set forth in claim 1. These additional elements are recited at a high level of generality such that it amounts to no more than mere instructions to apply the judicial exception (i.e., the abstract ideas) using generic computer components (i.e., memories and processors). Therefore, the claim does not recite additional elements that integrate the judicial exception into a practical application.
Step 2B: The same rationale for Step 2B stated for claim 1 is incorporated here by reference. As stated above in regards to the integration of the judicial exception into a practical application, the additional elements of “one or more memories” and “one or more processors” configured to execute the same limitations set forth in claim 1 are recited at a high level of generality such that it amounts to no more than mere instructions to apply the judicial exception (i.e., the abstract ideas) using generic computer components (i.e., memories and processors). Hence, claim 11 lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is therefore rejected.
Regarding claim 12,
Claim 12 (directed towards an apparatus) recites the same and/or analogous limitations as claim 2 above. Therefore the claim is rejected under the same rationale stated for claim 2.
Regarding claim 13,
Claim 13 (directed towards an apparatus) recites the same and/or analogous limitations as claim 3 above. Therefore the claim is rejected under the same rationale stated for claim 3.
Regarding claim 14,
Claim 14 (directed towards an apparatus) recites the same and/or analogous limitations as claim 4 above. Therefore the claim is rejected under the same rationale stated for claim 4.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 5, 6, 7, and 11 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sadrieh et al. (U.S. Patent No. 11294756, filed Sep. 19, 2019 and issued Apr. 5, 2022).
Regarding claim 1, Sadrieh teaches a non-transitory computer-readable storage medium storing an estimation program that causes at least one computer to execute a process (Sadrieh, Col. 6, lines 8-26 teaches with reference to FIG. 7, the computing environment 700 includes one or more processing units 710, 715 and memory 720, 725. In FIG. 7, this basic configuration 730 is included within a dashed line. The processing units 710, 715 execute computer-executable instructions. … The tangible memory 720, 725 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two, accessible by the processing unit(s). The memory 720, 725 stores software 780 implementing one or more innovations described herein, in the form of computer-executable instructions suitable for execution by the processing unit(s); Sadrieh, Col. 6, lines 27-29 further teaching a computing system may have additional features. For example, the computing environment 700 includes storage 740, and Col. 6, lines 38-44 further teaching the tangible storage 740 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed within the computing environment 700. The storage 740 stores instructions for the software 780 implementing one or more innovations described herein.), the process comprising:
inputting an input data into a trained variational autoencoder that includes an encoder and a decoder (Sadrieh, Fig. 3 teaches a variational autoencoder (VAE) comprising input data (310), an encoder (320), and a decoder (350); Col. 3, lines 4-17 teach the VAE is a neural network that is trained to copy an input vector x∈R.sup.N to its output {circumflex over (x)}∈R.sup.N, wherein R is a real number and N relates to the size of the input vector… The VAE includes two parts, an encoder, with function ƒ(x)=z∈R.sup.M, and the decoder g(z); M relates to a size of an output vector produced by the encoder.);
converting, into a first probability distribution, a probability distribution of a latent variable that is generated by the trained variational autoencoder according to the input based on a magnitude of a standard deviation output from the encoder (Sadrieh, Figure 3 teaches converting into a first parametrized latent distribution (340) generated by the encoder of the variational autoencoder according to the input (310); Sadrieh, Col. 3, lines 26-63 further teaches [regarding Figure 3] the probabilistic encoder (which includes encoder 320) parametrizes the distribution of z, using the distribution estimation engine, which can be separate from the encoder or integrated into the encoder, by estimating descriptive statistics of the distribution, such as their mean μ.sub.z, variance σ.sup.2.sub.z [Note: the variance σ.sup.2.sub.z is the square of the standard deviation σ and thus reflects its magnitude]);
converting the first probability distribution into a second probability distribution based on an output error of the decoder regarding the input data (Sadrieh, Col. 3, lines 26-63 further teaches, regarding Figure 3, that the parameterized latent distribution 340 [i.e., the first probability distribution] is fed into a decoder 350. Both the encoder 320 and decoder 350 of the VAE are implemented using an L-layer LSTM configuration. The probabilistic decoder 350 parametrizes the distribution of {circumflex over (x)}∈Z.sup.N by estimating the mean μ.sub.{circumflex over (x)} and variance σ.sup.2.sub.x. An output {circumflex over (x)}.sub.0-{circumflex over (x)}.sub.T, shown at 360, represents the reconstructed input vector 310, but with reduced noise. The output vector 360 is then input into a reconstruction of the probability 370. [i.e., the reconstruction probability (370) is understood as “a second probability distribution based on an output error of the decoder regarding the input data”]); and
outputting the second probability distribution as an estimated value of a probability distribution of the input data (Sadrieh, Col. 3, lines 26-63 teaches the probabilistic decoder 350 parametrizes the distribution of {circumflex over (x)}∈Z.sup.N by estimating the mean μ.sub.{circumflex over (x)} and variance σ.sup.2.sub.x. An output {circumflex over (x)}.sub.0-{circumflex over (x)}.sub.T, shown at 360, represents the reconstructed input vector 310, but with reduced noise. The output vector 360 is then input into a reconstruction of the probability 370.; Sadrieh, Figure 3, 360 and 370 teaching the claimed outputting.).
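For illustration only (outside the claim mapping above), the VAE flow Sadrieh describes, in which the encoder parametrizes a latent distribution by a mean and variance, a latent sample is decoded into a parametrized output distribution, and a reconstruction probability is computed for the input, can be sketched as follows; all function names, weights, and dimensions here are hypothetical, not Sadrieh's:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_mu, W_logvar):
    # Encoder parametrizes the latent distribution q(z|x) by estimating
    # its mean and (log-)variance, as in the distribution estimation step.
    return W_mu @ x, W_logvar @ x

def decoder(z, V_mu, V_logvar):
    # Decoder parametrizes the output distribution p(x|z) by a mean
    # and (log-)variance.
    return V_mu @ z, V_logvar @ z

def reconstruction_log_prob(x, mu_x, logvar_x):
    # Log-likelihood of the input under the decoder's Gaussian output
    # distribution, i.e. a "reconstruction probability" of the input.
    var = np.exp(logvar_x)
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + (x - mu_x) ** 2 / var)))

# Toy dimensions: N-dim input, M-dim latent, untrained random weights.
N, M = 4, 2
x = rng.normal(size=N)
W_mu, W_logvar = rng.normal(size=(M, N)), rng.normal(size=(M, N)) * 0.1
V_mu, V_logvar = rng.normal(size=(N, M)), rng.normal(size=(N, M)) * 0.1

mu_z, logvar_z = encoder(x, W_mu, W_logvar)
z = mu_z + np.exp(0.5 * logvar_z) * rng.normal(size=M)  # sample the latent
mu_x, logvar_x = decoder(z, V_mu, V_logvar)
score = reconstruction_log_prob(x, mu_x, logvar_x)
```

With trained weights, an input that is poorly explained by the learned distribution yields a low reconstruction probability, which is what makes the second probability distribution usable for anomaly scoring.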
Regarding claim 5, Sadrieh teaches all of the limitations of claim 1, and Sadrieh further teaches the process further comprising detecting data, with a lower probability, that occupies a certain ratio as anomaly data based on the second probability distribution (Sadrieh, Figure 5 teaches “calculate a reconstruction probability using the compressed time series of input data” (560) and “calculate an anomaly score” (570) [Note: here, the calculated anomaly score is based on the calculated reconstruction probability distribution, this probability distribution being understood as the second probability distribution as stated above in claim 1]; Sadrieh, Col. 2, lines 44-52 teaches the output of the decoder 140 is input into an anomaly score module 160, which generates an anomaly score indicating whether there are anomalies in the time series data 108. Typical anomaly scores range from 0-1, wherein 1 indicates a very high probability that an anomaly exists. The anomaly score determination 160 can be a classifier, which produces a single score every pre-determined time period, or an aggregator that generates a continuous output, which is compared to a threshold. [Note: an anomaly score closer to zero would indicate a “lower” probability that an anomaly exists, the 0-1 range reading on occupying a certain ratio as anomaly data]).
Regarding claim 6, Sadrieh teaches all of the limitations of claim 1, and Sadrieh further teaches the process further comprising detecting data with a probability equal to or less than a threshold as anomaly data based on the second probability distribution (Sadrieh, Figure 5 teaches “calculate a reconstruction probability using the compressed time series of input data” (560) and “calculate an anomaly score” (570) [Note: here, the calculated anomaly score is based on the calculated reconstruction probability distribution, this probability distribution being understood as the second probability distribution as stated above in claim 1]; Sadrieh, Col. 2, lines 44-52 teaches the output of the decoder 140 is input into an anomaly score module 160, which generates an anomaly score indicating whether there are anomalies in the time series data 108. Typical anomaly scores range from 0-1, wherein 1 indicates a very high probability that an anomaly exists. The anomaly score determination 160 can be a classifier, which produces a single score every pre-determined time period, or an aggregator that generates a continuous output, which is compared to a threshold.; Sadrieh, Col. 5, lines 55-62 further teaches in process block 640, anomaly detection can then be performed on the resultant output of the decoder (i.e., the second probability distribution). For example, in FIG. 3, a RIF 380 can be used to generate an anomaly score. A mean and variance of the decoder output 360 can be used as an input to the RIF. The anomaly scores can be analyzed to determine whether they exceed a threshold and alerts can be generated for any detected anomalies. [Note: the anomaly scores here are generated based on the second probability distributions outputted by the decoder, and these anomaly scores are then compared to a threshold to determine whether these probabilities are equal to or less than the threshold].).
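For illustration only, the threshold comparison in Sadrieh's anomaly score determination, where scores in the 0-1 range are compared against a threshold and higher scores indicate anomalies, can be sketched as follows; the function name and threshold value are hypothetical:

```python
import numpy as np

def detect_anomalies(scores, threshold=0.5):
    # Flag time periods whose anomaly score (0-1, higher = more likely
    # anomalous) exceeds the threshold; equivalently, a low reconstruction
    # probability maps to a high score and triggers an alert.
    return np.asarray(scores, dtype=float) > threshold

# Toy per-period anomaly scores produced by a scoring module.
scores = [0.05, 0.12, 0.91, 0.40, 0.87]
flags = detect_anomalies(scores, threshold=0.5)
# flags -> [False, False, True, False, True]
```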
Regarding claim 7,
Claim 7 is directed towards a method and recites the same and/or analogous limitations as claim 1. Therefore, claim 7 is rejected based on the same rationale as set forth in claim 1.
Regarding claim 11, Sadrieh teaches an information processing apparatus comprising: one or more memories; and one or more processors coupled to the one or more memories and the one or more processors (Col. 6, lines 8-26 teaches with reference to FIG. 7, the computing environment 700 includes one or more processing units 710, 715 and memory 720, 725. In FIG. 7, this basic configuration 730 is included within a dashed line. The processing units 710, 715 execute computer-executable instructions. … The tangible memory 720, 725 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two, accessible by the processing unit(s). The memory 720, 725 stores software 780 implementing one or more innovations described herein, in the form of computer-executable instructions suitable for execution by the processing unit(s)) configured to:
input an input data into a trained variational autoencoder that includes an encoder and a decoder (Sadrieh, Fig. 3 teaches a variational autoencoder (VAE) comprising input data (310), an encoder (320), and a decoder (350); Col. 3, lines 4-17 teach the VAE is a neural network that is trained to copy an input vector x∈R.sup.N to its output {circumflex over (x)}∈R.sup.N, wherein R is a real number and N relates to the size of the input vector… The VAE includes two parts, an encoder, with function ƒ(x)=z∈R.sup.M, and the decoder g(z); M relates to a size of an output vector produced by the encoder.),
convert, into a first probability distribution, a probability distribution of a latent variable that is generated by the trained variational autoencoder according to the input based on a magnitude of a standard deviation output from the encoder (Sadrieh, Figure 3 teaches converting into a first parametrized latent distribution (340) generated by the encoder of the variational autoencoder according to the input (310); Sadrieh, Col. 3, lines 26-63 further teaches [regarding Figure 3] the probabilistic encoder (which includes encoder 320) parametrizes the distribution of z, using the distribution estimation engine, which can be separate from the encoder or integrated into the encoder, by estimating descriptive statistics of the distribution, such as their mean μ.sub.z, variance σ.sup.2.sub.z [Note: the variance σ.sup.2.sub.z is the square of the standard deviation σ and thus reflects its magnitude]),
convert the first probability distribution into a second probability distribution based on an output error of the decoder regarding the input data (Sadrieh, Col. 3, lines 26-63 further teaches, regarding Figure 3, that the parameterized latent distribution 340 [i.e., the first probability distribution] is fed into a decoder 350. Both the encoder 320 and decoder 350 of the VAE are implemented using an L-layer LSTM configuration. The probabilistic decoder 350 parametrizes the distribution of {circumflex over (x)}∈Z.sup.N by estimating the mean μ.sub.{circumflex over (x)} and variance σ.sup.2.sub.x. An output {circumflex over (x)}.sub.0-{circumflex over (x)}.sub.T, shown at 360, represents the reconstructed input vector 310, but with reduced noise. The output vector 360 is then input into a reconstruction of the probability 370. [i.e., the reconstruction probability (370) is understood as “a second probability distribution based on an output error of the decoder regarding the input data”]), and
output the second probability distribution as an estimated value of a probability distribution of the input data (Sadrieh, Col. 3, lines 26-63 teaches the probabilistic decoder 350 parametrizes the distribution of {circumflex over (x)}∈Z.sup.N by estimating the mean μ.sub.{circumflex over (x)} and variance σ.sup.2.sub.x. An output {circumflex over (x)}.sub.0-{circumflex over (x)}.sub.T, shown at 360, represents the reconstructed input vector 310, but with reduced noise. The output vector 360 is then input into a reconstruction of the probability 370.; Sadrieh, Figure 3, 360 and 370 teaching the claimed outputting.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2, 8, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Sadrieh et al., as applied to claim 1, in view of Moore et al. (U.S. Patent No. 10,402,726, filed May 3, 2018 and published Sep. 3, 2019).
Regarding claim 2, Sadrieh teaches all of the limitations of claim 1; however, Sadrieh does not distinctly disclose wherein the converting the probability distribution of the latent variable into the first probability distribution includes converting the probability distribution of the latent variable into the first probability distribution according to conversion processing from the latent variable into principal component coordinates, based on the magnitude of the standard deviation.
Nevertheless, Moore teaches wherein the converting the probability distribution of the latent variable into the first probability distribution includes converting the probability distribution of the latent variable into the first probability distribution according to conversion processing from the latent variable into principal component coordinates, based on the magnitude of the standard deviation (Moore, Col. 14, lines 48-59 teaches the principal component analyzer 152 is configured to receive the selected feature set 122 and to identify one or more principal components that are principally responsible for a threshold amount of variance in output data 154. The output data 154 may be outputs of simulations based on holding the target feature 106 at the target value 108, as described with reference to FIGS. 1A-1C. To illustrate, the principal component analyzer 152 may perform principal component analysis on the selected feature set 122 to generate a list of components that is ordered by contribution of the component to the overall variance of the output data 154.; Moore, Col. 15, lines 61-67 and Col. 16, lines 1-21 teaches in a particular implementation, a system may include both the multiple neural networks 124 (and remaining components of FIG. 1C) and the principal component analyzer 152. Such a system may selectively provide the selected feature set 122 to either or both of the multiple neural networks 124 or to the principal component analyzer 152 based on one or more factors. In some cases, both the principal component analysis and variational autoencoders may be used and the feature value distributions presented via the GUI may be based on aggregating, averaging, or otherwise combining output from both the principal component analysis and the variational autoencoders.; Figure 1C teaches Multiple Neural Networks (VAEs) (124)).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have combined the variational autoencoder, as taught by Sadrieh, with the principal component analyzer, as taught by Moore, given that the principal component analyzer may generate simulations that constrain the target feature more quickly than the multiple neural networks for smaller input data sets, thereby requiring fewer processing resources for smaller input datasets (Moore, Col. 16, lines 1-5).
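For illustration only (a generic sketch, not Moore's disclosed implementation or Applicant's claimed method; all identifiers are the editor's assumptions), a conversion of latent-variable samples into principal component coordinates ordered by each component's contribution to the overall variance can be sketched as:

```python
import numpy as np

def to_principal_coordinates(z):
    """Project latent samples z (n_samples x n_dims) onto principal
    component axes, ordered by each component's contribution to the
    overall variance (largest first)."""
    z_centered = z - z.mean(axis=0)                 # center the data
    cov = np.cov(z_centered, rowvar=False)          # latent covariance
    eigvals, eigvecs = np.linalg.eigh(cov)          # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]               # largest variance first
    return z_centered @ eigvecs[:, order], eigvals[order]

# Latent samples whose dimensions have variances of roughly 4, 1, and 0.25:
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 3)) * np.array([2.0, 1.0, 0.5])
coords, variances = to_principal_coordinates(z)
```

Ordering the axes by variance corresponds to Moore's list of components ordered by their contribution to the overall variance of the output data.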
Regarding claim 8,
Claim 8 is directed towards a method and recites the same and/or analogous limitations as claim 2. Therefore, claim 8 is rejected based on the same rationale and motivation as set forth for claim 2.
Regarding claim 12,
Claim 12 is directed towards a method and recites the same and/or analogous limitations as claim 2. Therefore, claim 12 is rejected based on the same rationale and motivation as set forth for claim 2.
Conclusion
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Kato et al., “Rate-Distortion Optimization Guided Autoencoder for Isometric Embedding in Euclidean Latent Space” (March 3, 2020) – Figure 3 and its accompanying description reading on claims 1, 7, and 11.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEATRIZ RAMIREZ BRAVO whose telephone number is 571-272-2156. The examiner can normally be reached Mon.-Fri., 7:30 a.m.-5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, USMAAN SAEED can be reached at 571-272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.R.B./Examiner, Art Unit 2146
/USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146