DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-13 and 21-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With respect to Claims 1, 21, and 22, the limitations of obtaining a context temporal sequence of a plurality of context radar fields characterizing a real-world location, each context radar field characterizing the weather in the real-world location at a corresponding preceding time point; sampling a set of one or more latent inputs by sampling values from a specified distribution; and, for each sampled latent input, processing the context temporal sequence of radar fields and the sampled latent input using a generative neural network that has been configured through training to process the temporal sequence of radar fields to generate as output a predicted temporal sequence comprising a plurality of predicted radar fields, each predicted radar field in the predicted temporal sequence characterizing the predicted weather in the real-world location at a corresponding future time point, are directed to an abstract idea and would fall within the "Mathematical Concepts" or "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of a method performed by one or more computers (Claim 1); one or more non-transitory computer-readable storage media storing instructions that when executed by one or more computers cause the one or more computers to perform operations (Claim 21); and a system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations (Claim 22). These limitations are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. As such, the Examiner does not view that the claims:
-Improve the functioning of a computer, or any other technology or technical field - see MPEP 2106.05(a)
-Apply the judicial exception with, or by use of, a particular machine - see MPEP 2106.05(b)
-Effect a transformation or reduction of a particular article to a different state or thing - see MPEP 2106.05(c)
-Apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception - see MPEP 2106.05(e) and the Vanda Memo
Moreover, the Examiner views the claims as merely generally linking the use of the judicial exception to a particular technological environment. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The Examiner further notes that such additional elements are viewed to be well-understood, routine, and conventional, as evidenced by Leinonen (Stochastic Super-Resolution for Downscaling Time-Evolving Atmospheric Fields with a Generative Adversarial Network), van den Oord (US 2018/0025257 A1), Duboue (US 2018/0374089 A1), and Kumar (US 2019/0303703 A1).
Considering the claims as a whole, one of ordinary skill in the art would not know the practical application of the present invention, since the claims do not apply or use the judicial exception in some meaningful way. As currently claimed, the Examiner views that the additional elements do not apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, because the claims fail to recite clearly how the judicial exception is applied in a manner that does not monopolize the exception.
Dependent claims 2-13 and 23-27, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea: there is no additional element in the dependent claims that adds a meaningful limitation to the abstract idea to make the claims significantly more than the judicial exception (abstract idea). Claims 2-13 and 23-27 further limit the abstract idea with an abstract idea, and thus the claims are still directed to an abstract idea without significantly more.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 11 recites an equation that appears in the claim only as an image (media_image1.png, greyscale), understood as best as can be determined to recite the dimensionalities h1×w1×1, h1/a×w1/a×b, h2×w2×1, and h2/a×w2/a×b.
However, these variables are recited without being defined and the claim is therefore indefinite.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-9, 12, 13, and 21-27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Leinonen (Stochastic Super-Resolution for Downscaling Time-Evolving Atmospheric Fields with a Generative Adversarial Network) in view of van den Oord (US 2018/0025257 A1).
With respect to Claim 1, Leinonen teaches A method performed by one or more computers, the method comprising: obtaining a context temporal sequence of a plurality of context radar fields characterizing a real-world location, each context radar field characterizing the weather in the real-world location at a corresponding preceding time point (See Page 2, Left Column, Para[0002]: "We use this GAN to stochastically downscale time series of images from two atmospheric remote-sensing datasets: precipitation measured by the MeteoSwiss ground-based weather radar network, and cloud optical depth imaged by the Geostationary Operational Environmental Satellite 16 (GOES-16)"); sampling a set of one or more inputs by sampling values from a specified distribution (See Page 1, Right Column, Para[0002]: "More recently, generative adversarial networks (GANs) have been used to train super-resolution CNNs [8], [9]. GANs are a general technique for generating artificial samples [10] from the training distribution." See also Page 2, Left Column, Para[0004]: "A GAN consists of two neural networks: the generator (G) and the discriminator (D). The discriminator is trained to determine whether or not its input is an example from the training dataset, while the generator is simultaneously trained to produce artificial samples that the discriminator classifies as real."); and for each sampled input, processing the context temporal sequence of radar fields and the sampled input using a generative neural network that has been configured through training to process the temporal sequence of radar fields to generate as output a predicted temporal sequence comprising a plurality of predicted radar fields (See Fig. 1 and Page 2, Para[0003]: "In contrast to most GANs, our networks also employ recurrent layers in the form of convolutional gated recurrent units (ConvGRUs), variants of the gated recurrent unit (GRU) [29]. These recurrent layers permit the network to learn the temporal evolution of the fields, while the convolutional and residual blocks learn the spatial structure." See also Page 4, Para[0002], Training), each predicted radar field in the predicted temporal sequence characterizing the predicted weather in the real-world location at a corresponding future time point (See Abstract: "It is also able to generate time series much longer than the training sequences, as demonstrated by applying the generator to a three-month dataset of the precipitation radar data.").
However, Leinonen is silent as to the language of one or more latent inputs. Nevertheless, van den Oord teaches one or more latent inputs (See Para[0064]: "the neural network input can include a high-level description of the desired content of the generated image that is represented as a latent vector"). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Leinonen and use one or more latent inputs such as those of van den Oord. One of ordinary skill would have been motivated to modify Leinonen because a latent input serves as an input to a model that captures underlying features of high-dimensional data such as images or text, allowing models to generate new content or recognize complex patterns that are not directly measurable, acting as hidden variables.
With respect to Claim 2, Leinonen teaches The method of claim 1, wherein: each context radar field comprises a respective measured precipitation rate for each of a plurality of grid cells that each correspond to a respective region of the real-world location at a first resolution, wherein the respective measured precipitation rate for each of the grid cells represents a precipitation rate that was measured at the corresponding region at the corresponding preceding time point (See Abstract, Section IV, Fig. 2); and each predicted radar field comprises a respective predicted precipitation rate for each of the plurality of grid cells that each correspond to a respective region of the real-world location at the first resolution, wherein the respective predicted precipitation rate for each of the grid cells represents a precipitation rate that is predicted to be measured at the corresponding region at the corresponding future time point (See Abstract, Section IV, Fig. 2).
With respect to Claim 3, Leinonen teaches The method of claim 1, wherein processing the context temporal sequence of radar fields and the sampled latent input using the generative neural network comprises: processing the context temporal sequence using a context conditioning convolutional stack to generate a respective context feature representation at each of a plurality of spatial resolutions (See Section II and Fig. 1); processing the latent input using a latent conditioning convolutional stack to generate a latent feature representation (See Section II and Fig. 1); and generating the predicted temporal sequence from the context feature representations and the latent feature representation (See Section II and Fig. 1).
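To illustrate the sampling-and-generation pattern mapped for Claims 1 and 3 above, the following minimal Python sketch is provided. It is a hypothetical illustration only: the names, shapes, and the choice of a normal distribution are assumptions, and the toy generator stands in for, but does not reproduce, the networks of Leinonen or van den Oord.

# Hypothetical sketch of the Claim 1 pattern: sample latent inputs from a
# specified distribution, then run a conditional generator once per sample.
import torch
import torch.nn as nn

T_CTX, T_PRED, H, W, LATENT_DIM = 4, 6, 32, 32, 8   # assumed sizes

class ConditionalGenerator(nn.Module):
    """Toy stand-in for a generative network conditioned on past radar fields."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(T_CTX + LATENT_DIM, T_PRED, kernel_size=3, padding=1)

    def forward(self, context, z):
        # context: (B, T_CTX, H, W); z: (B, LATENT_DIM)
        z_map = z[:, :, None, None].expand(-1, -1, H, W)      # broadcast latent over the grid
        return self.conv(torch.cat([context, z_map], dim=1))  # (B, T_PRED, H, W)

generator = ConditionalGenerator()
context = torch.rand(1, T_CTX, H, W)    # stand-in for the context radar fields

predictions = []
for _ in range(3):                      # a set of one or more latent inputs
    z = torch.randn(1, LATENT_DIM)      # values sampled from a specified (here normal) distribution
    with torch.no_grad():
        predictions.append(generator(context, z))
print(predictions[0].shape)             # torch.Size([1, 6, 32, 32])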
With respect to Claim 4, Leinonen teaches The method of claim 3, wherein generating the predicted temporal sequence from the context feature representations and the latent feature representation comprises: for each spatial resolution, initializing a hidden state of a corresponding convolutional recurrent neural network (convRNN) in a sequence of convRNNs that operates at the spatial resolution to be the respective context feature representation at the spatial resolution (See Section II and Fig. 1); and generating the first predicted radar field at the first future time point in the predicted temporal sequence (See Section II and Fig. 1), comprising: processing the latent feature representation through the sequence of convRNNs in accordance with the respective hidden states of each of the convRNNs to (i) update the respective hidden states of each of the convRNNs and (ii) generate an output feature representation for the first future time point (See Section II and Fig. 1); and processing the output feature representation for the first future time point using an output convolutional stack to generate the predicted radar field at the first future time point (See Section II and Fig. 1).
With respect to Claim 5, Leinonen teaches The method of claim 4, wherein generating the predicted temporal sequence from the context feature representations and the latent feature representation comprises: for each future time point in the temporal sequence after the first future time point (See Section II and Fig. 1): processing the latent feature representation through the sequence of convRNNs in accordance with respective hidden states of each of the convRNNs as of the preceding future time point in the temporal sequence to (i) update the respective hidden states of each of the convRNNs and (ii) generate an output feature representation for the future time point (See Section II and Fig. 1); and processing the output feature representation for the future time point using the output convolutional stack to generate the predicted radar field at the future time point (See Section II and Fig. 1).
With respect to Claim 6, Leinonen teaches The method of claim 1, wherein the generative neural network has been trained jointly with one or more discriminator neural networks on training data that includes sequences of observed radar fields to optimize a generative adversarial networks (GAN) objective (See Section II and Fig. 1).
With respect to Claim 7, Leinonen teaches The method of claim 6, wherein the one or more discriminator neural networks include a temporal discriminator neural network that distinguishes sequences of observed radar fields from the training data from sequences of predicted radar fields generated by the generative neural network (See Section II and Fig. 1).
With respect to Claim 8, Leinonen teaches The method of claim 6, wherein the one or more discriminator neural networks include a spatial discriminator neural network that distinguishes individual observed radar fields from the training data from individual predicted radar fields generated by the generative neural network (See Section II and Fig. 1).
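As a further illustration of the recurrent rollout recited in Claims 4 and 5, the sketch below uses a generic textbook ConvGRU formulation; it is an assumption-laden sketch, not Leinonen's implementation (which the reference describes in its Section II and Fig. 1).

# Hypothetical sketch of the Claims 4-5 rollout: a convolutional GRU whose hidden
# state is initialized from a context feature map and stepped once per future
# time point on the latent feature representation.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3, padding=1)  # update/reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)       # candidate state

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_new = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_new

cell = ConvGRUCell(in_ch=8, hid_ch=16)
latent_features = torch.rand(1, 8, 16, 16)   # latent feature representation (assumed shape)
h = torch.rand(1, 16, 16, 16)                # hidden state initialized from a context feature map
to_field = nn.Conv2d(16, 1, 1)               # stand-in for the output convolutional stack

fields = []
for _ in range(6):                 # one step per future time point
    h = cell(latent_features, h)   # (i) update the hidden state
    fields.append(to_field(h))     # (ii) map output features to a predicted radar field
print(fields[-1].shape)            # torch.Size([1, 1, 16, 16])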
With respect to Claim 9, Leinonen teaches The method of claim 6, wherein the generator neural network and the discriminator neural networks are trained on observed radar fields that have a first dimensionality, wherein after the training the context radar fields received as input by the generator neural network and the predicted radar fields generated by the generator neural network have a second dimensionality, and wherein the first dimensionality is smaller than the second dimensionality (See Section II and Fig. 1).
With respect to Claim 12, Leinonen teaches The method of claim 1, wherein sampling each latent input comprises: sampling each value in the latent input independently from the specified distribution (See Section II and Fig. 1).
With respect to Claim 13, Leinonen teaches The method of claim 1, wherein the set of latent inputs includes a plurality of latent inputs (See Section II and Fig. 1).
With respect to Claim 21, Leinonen teaches One or more non-transitory computer-readable storage media storing instructions that when executed by one or more computers cause the one or more computers to perform operations comprising: obtaining a context temporal sequence of a plurality of context radar fields characterizing a real-world location, each context radar field characterizing the weather in the real-world location at a corresponding preceding time point (See Page 2, Left Column, Para[0002], as quoted for Claim 1); sampling a set of one or more inputs by sampling values from a specified distribution (See Page 1, Right Column, Para[0002] and Page 2, Left Column, Para[0004], as quoted for Claim 1); and for each sampled input, processing the context temporal sequence of radar fields and the sampled input using a generative neural network that has been configured through training to process the temporal sequence of radar fields to generate as output a predicted temporal sequence comprising a plurality of predicted radar fields, each predicted radar field in the predicted temporal sequence characterizing the predicted weather in the real-world location at a corresponding future time point (See Fig. 1 and Page 2, Para[0003], as quoted for Claim 1; see also Page 4, Para[0002], Training).
However, Leinonen is silent as to the language of one or more latent inputs. Nevertheless, van den Oord teaches one or more latent inputs (See Para[0064]: "the neural network input can include a high-level description of the desired content of the generated image that is represented as a latent vector"). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Leinonen and use one or more latent inputs such as those of van den Oord, for the same reasons given with respect to Claim 1.
With respect to Claim 22, Leinonen teaches A system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations comprising: obtaining a context temporal sequence of a plurality of context radar fields characterizing a real-world location, each context radar field characterizing the weather in the real-world location at a corresponding preceding time point (See Page 2, Left Column, Para[0002], as quoted for Claim 1); sampling a set of one or more inputs by sampling values from a specified distribution (See Page 1, Right Column, Para[0002] and Page 2, Left Column, Para[0004], as quoted for Claim 1); and for each sampled input, processing the context temporal sequence of radar fields and the sampled input using a generative neural network that has been configured through training to process the temporal sequence of radar fields to generate as output a predicted temporal sequence comprising a plurality of predicted radar fields, each predicted radar field in the predicted temporal sequence characterizing the predicted weather in the real-world location at a corresponding future time point (See Fig. 1 and Page 2, Para[0003], as quoted for Claim 1; see also Page 4, Para[0002], Training).
However, Leinonen is silent as to the language of one or more latent inputs. Nevertheless, van den Oord teaches one or more latent inputs (See Para[0064]). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Leinonen and use one or more latent inputs such as those of van den Oord, for the same reasons given with respect to Claim 1.
With respect to Claim 23, Leinonen teaches The system of claim 22, wherein: each context radar field comprises a respective measured precipitation rate for each of a plurality of grid cells that each correspond to a respective region of the real-world location at a first resolution, wherein the respective measured precipitation rate for each of the grid cells represents a precipitation rate that was measured at the corresponding region at the corresponding preceding time point (See Abstract, Section IV, Fig. 2); and each predicted radar field comprises a respective predicted precipitation rate for each of the plurality of grid cells that each correspond to a respective region of the real-world location at the first resolution, wherein the respective predicted precipitation rate for each of the grid cells represents a precipitation rate that is predicted to be measured at the corresponding region at the corresponding future time point (See Abstract, Section IV, Fig. 2).
With respect to Claim 24, Leinonen teaches The system of claim 22, wherein processing the context temporal sequence of radar fields and the sampled latent input using the generative neural network comprises: processing the context temporal sequence using a context conditioning convolutional stack to generate a respective context feature representation at each of a plurality of spatial resolutions (See Section II and Fig. 1); processing the latent input using a latent conditioning convolutional stack to generate a latent feature representation (See Section II and Fig. 1); and generating the predicted temporal sequence from the context feature representations and the latent feature representation (See Section II and Fig. 1).
With respect to Claim 25, Leinonen teaches The system of claim 24, wherein generating the predicted temporal sequence from the context feature representations and the latent feature representation comprises: for each spatial resolution, initializing a hidden state of a corresponding convolutional recurrent neural network (convRNN) in a sequence of convRNNs that operates at the spatial resolution to be the respective context feature representation at the spatial resolution (See Section II and Fig. 1); and generating the first predicted radar field at the first future time point in the predicted temporal sequence (See Section II and Fig. 1), comprising: processing the latent feature representation through the sequence of convRNNs in accordance with the respective hidden states of each of the convRNNs to (i) update the respective hidden states of each of the convRNNs and (ii) generate an output feature representation for the first future time point (See Section II and Fig. 1); and processing the output feature representation for the first future time point using an output convolutional stack to generate the predicted radar field at the first future time point (See Section II and Fig. 1).
With respect to Claim 26, Leinonen teaches The system of claim 25, wherein generating the predicted temporal sequence from the context feature representations and the latent feature representation comprises: for each future time point in the temporal sequence after the first future time point (See Section II and Fig. 1): processing the latent feature representation through the sequence of convRNNs in accordance with respective hidden states of each of the convRNNs as of the preceding future time point in the temporal sequence to (i) update the respective hidden states of each of the convRNNs and (ii) generate an output feature representation for the future time point (See Section II and Fig. 1); and processing the output feature representation for the future time point using the output convolutional stack to generate the predicted radar field at the future time point (See Section II and Fig. 1).
With respect to Claim 27, Leinonen teaches The system of claim 22, wherein the generative neural network has been trained jointly with one or more discriminator neural networks on training data that includes sequences of observed radar fields to optimize a generative adversarial networks (GAN) objective (See Section II and Fig. 1).
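For completeness, the adversarial training arrangement recited in Claims 6-8 and 27 can be sketched as follows. The linear critics and binary cross-entropy losses below are illustrative assumptions chosen for brevity, not the architecture of Leinonen's discriminator.

# Hypothetical sketch of joint GAN training with temporal and spatial discriminators.
import torch
import torch.nn as nn

B, T, H, W = 2, 6, 16, 16
fake = torch.rand(B, T, H, W, requires_grad=True)   # stand-in for generator output
real = torch.rand(B, T, H, W)                       # observed radar sequences

temporal_d = nn.Linear(T * H * W, 1)   # scores whole sequences (Claim 7)
spatial_d = nn.Linear(H * W, 1)        # scores individual fields (Claim 8)
bce = nn.BCEWithLogitsLoss()

def seqs(x): return x.reshape(B, -1)         # (B, T*H*W) sequence-level inputs
def fields(x): return x.reshape(B * T, -1)   # (B*T, H*W) per-field inputs

# Discriminator objective: observed -> 1, generated -> 0 (generated detached).
d_loss = (bce(temporal_d(seqs(real)), torch.ones(B, 1))
          + bce(temporal_d(seqs(fake.detach())), torch.zeros(B, 1))
          + bce(spatial_d(fields(real)), torch.ones(B * T, 1))
          + bce(spatial_d(fields(fake.detach())), torch.zeros(B * T, 1)))

# Generator objective: fool both discriminators.
g_loss = (bce(temporal_d(seqs(fake)), torch.ones(B, 1))
          + bce(spatial_d(fields(fake)), torch.ones(B * T, 1)))
print(float(d_loss), float(g_loss))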
Claim(s) 10 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Leinonen (Stochastic Super-Resolution for Downscaling Time-Evolving Atmospheric Fields with a Generative Adversarial Network) in view of van den Oord (US 2018/0025257 A1) as applied to Claim 9 above, and further in view of Duboue (US 2018/0374089 A1).
With respect to Claim 10, Leinonen is silent as to the language of The method of claim 9, wherein during the training, the sampled latent inputs have a smaller dimensionality than the sampled latent inputs after training. Nevertheless, Duboue teaches wherein during the training, the sampled latent inputs have a smaller dimensionality than the sampled latent inputs after training (See Para[0034]: "The output layer has the same number of nodes as the input layer, while the hidden layer, which supplies the encoding algorithm after a number of training iterations, has a smaller dimension than the input and output layers."). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Leinonen and have a smaller dimensionality such as that of Duboue. One of ordinary skill would have been motivated to make the modification in order to improve efficiency.
With respect to Claim 11, Leinonen is silent as to the language of The method of claim 10, wherein, as best understood, the first dimensionality is h1×w1×1 and the dimensionality of the sampled latent inputs during training is h1/a×w1/a×b, the second dimensionality is h2×w2×1 and the dimensionality of the sampled latent inputs after training is h2/a×w2/a×b, h2 is larger than h1, and w2 is larger than w1. Nevertheless, Duboue teaches such dimensionality relationships (See Para[0034]). The Examiner notes it would have been obvious to one having ordinary skill in the art at the time the invention was made to have such ranges, since it has been held that where the general conditions of a claim are disclosed in the prior art, discovering the optimum or workable ranges involves only routine skill in the art. In re Aller, 105 USPQ 233. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Leinonen and have the dimensionality such as that of Duboue. One of ordinary skill would have been motivated to make the modification in order to improve efficiency.
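For clarity, the dimensionality relationships recited in Claims 9-11, as best understood, can be illustrated with arbitrary example values (the concrete numbers below are illustrative assumptions, not values from the claims or the references).

# Worked illustration of the Claims 9-11 dimensionalities with arbitrary values.
h1, w1 = 32, 32      # first dimensionality: training-time radar fields are h1 x w1 x 1
h2, w2 = 128, 128    # second dimensionality: post-training fields are h2 x w2 x 1
a, b = 4, 8          # spatial down-scaling factor and latent depth (assumed)

latent_train = (h1 // a, w1 // a, b)   # latent inputs during training: h1/a x w1/a x b
latent_after = (h2 // a, w2 // a, b)   # latent inputs after training:  h2/a x w2/a x b

assert h2 > h1 and w2 > w1                                                     # Claims 9 and 11
assert latent_train[0] * latent_train[1] < latent_after[0] * latent_after[1]  # Claim 10
print(latent_train, latent_after)      # (8, 8, 8) (32, 32, 8)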
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kumar (US 2019/0303703 A1) teaches a method of identifying land cover that receives multi-spectral values for a plurality of locations at a plurality of times; a latent representation determined from these values is then used to predict the land cover for a selected location at a selected time.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOSHIHISA ISHIZUKA whose telephone number is (571)270-7050. The examiner can normally be reached M-F 11:00-7:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Catherine Rastovski can be reached at (571) 270-0349. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
YOSHIHISA ISHIZUKA
Examiner
Art Unit 2863
/YOSHIHISA ISHIZUKA/ Primary Examiner, Art Unit 2863