DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claim 1 (filed on 09/12/2023) was original. In a preliminary amendment, filed on 09/13/2023, claim 1 was cancelled and claims 2-21 were added as new. Claims 2-21 are pending and are examined.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq.
for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 2, 12, and 21 (together with their dependent claims, i.e., claims 3-11 and 13-20) are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 9, and 17 of U.S. Patent No. 11790209 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are broader versions of the claims of the patent (which is the parent case). See the claim-by-claim comparison below.

Instant Application, claim 2:
A computer implemented method for generating a final output image, comprising: repeatedly updating a neural network output at each of a plurality of time steps to generate a final neural network output, the updating comprising, for each of the time steps: generating a decoder input for a decoder neural network for the time step, wherein the decoder neural network is configured to, for each of the time steps, receive the decoder input for the time step, and process a decoder hidden state vector for a preceding time step and the decoder input to generate a decoder hidden state vector for the time step; processing the decoder input for the time step using the decoder neural network to generate the decoder hidden state vector for the time step; generating a neural network output update for the time step from the decoder hidden state vector for the time step; and combining the neural network output update for the time step with a current neural network output to generate an updated neural network output; and generating the final output image from the final neural network output.

USP 11790209 B2, claim 1:
A computer implemented method for generating a final output image, comprising: repeatedly updating a neural network output at each of a plurality of time steps to generate a final neural network output, the updating comprising, for each of the time steps: generating a decoder input for a decoder neural network for the time step, wherein the decoder neural network is a recurrent neural network that is configured to, for each of the time steps, receive the decoder input for the time step, and process a decoder hidden state vector for a preceding time step and the decoder input to generate a decoder hidden state vector for the time step; processing the decoder input for the time step using the decoder neural network to generate the decoder hidden state vector for the time step; generating a neural network output update for the time step from the decoder hidden state vector for the time step; and combining the neural network output update for the time step with a current neural network output to generate an updated neural network output; and generating the final output image from the final neural network output.

Instant Application, claim 12:
A system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations comprising: repeatedly updating a neural network output at each of a plurality of time steps to generate a final neural network output, the updating comprising, for each of the time steps: generating a decoder input for a decoder neural network for the time step, wherein the decoder neural network is configured to, for each of the time steps, receive the decoder input for the time step, and process a decoder hidden state vector for a preceding time step and the decoder input to generate a decoder hidden state vector for the time step; processing the decoder input for the time step using the decoder neural network to generate the decoder hidden state vector for the time step; generating a neural network output update for the time step from the decoder hidden state vector for the time step; and combining the neural network output update for the time step with a current neural network output to generate an updated neural network output; and generating a final output image from the final neural network output.

USP 11790209 B2, claim 9:
A system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations comprising: repeatedly updating a neural network output at each of a plurality of time steps to generate a final neural network output, the updating comprising, for each of the time steps: generating a decoder input for a decoder neural network for the time step, wherein the decoder neural network is a recurrent neural network that is configured to, for each of the time steps, receive the decoder input for the time step, and process a decoder hidden state vector for a preceding time step and the decoder input to generate a decoder hidden state vector for the time step; processing the decoder input for the time step using the decoder neural network to generate the decoder hidden state vector for the time step; generating a neural network output update for the time step from the decoder hidden state vector for the time step; and combining the neural network output update for the time step with a current neural network output to generate an updated neural network output; and generating a final output image from the final neural network output.

Instant Application, claim 21:
One or more non-transitory computer-readable storage media storing instructions that when executed by one or more computers cause the one or more computers to perform operations comprising: repeatedly updating a neural network output at each of a plurality of time steps to generate a final neural network output, the updating comprising, for each of the time steps: generating a decoder input for a decoder neural network for the time step, wherein the decoder neural network is configured to, for each of the time steps, receive the decoder input for the time step, and process a decoder hidden state vector for a preceding time step and the decoder input to generate a decoder hidden state vector for the time step; processing the decoder input for the time step using the decoder neural network to generate the decoder hidden state vector for the time step; generating a neural network output update for the time step from the decoder hidden state vector for the time step; and combining the neural network output update for the time step with a current neural network output to generate an updated neural network output; and generating a final output image from the final neural network output.

USP 11790209 B2, claim 17:
One or more non-transitory computer-readable storage media storing instructions that when executed by one or more computers cause the one or more computers to perform operations comprising: repeatedly updating a neural network output at each of a plurality of time steps to generate a final neural network output, the updating comprising, for each of the time steps: generating a decoder input for a decoder neural network for the time step, wherein the decoder neural network is a recurrent neural network that is configured to, for each of the time steps, receive the decoder input for the time step, and process a decoder hidden state vector for a preceding time step and the decoder input to generate a decoder hidden state vector for the time step; processing the decoder input for the time step using the decoder neural network to generate the decoder hidden state vector for the time step; generating a neural network output update for the time step from the decoder hidden state vector for the time step; and combining the neural network output update for the time step with a current neural network output to generate an updated neural network output; and generating a final output image from the final neural network output.

Allowable Subject Matter

Claims 2-21 will be allowed once the above nonstatutory double patenting rejection is overcome, i.e., once the appropriate terminal disclaimer(s) are filed. The following is a statement of reasons for the indication of allowable subject matter: the claims are slightly broader versions of the claims of the patent.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see PTO-892). For example, US 11080587 and US 11790209 are directed to Recurrent Neural Networks For Data Item Generation.
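For illustration of the subject matter at issue, the iterative update recited in the claims (generate a decoder input for the time step, advance a recurrent hidden state, emit an output update, and combine it with the running output) can be sketched as follows. This is a minimal, hypothetical sketch only, not the patented implementation: the dimensions, the random decoder inputs, the untrained weight matrices, and the final sigmoid squashing are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes chosen only for illustration; nothing here reflects
# the actual networks described in USP 11790209 B2.
INPUT_DIM, HIDDEN_DIM, OUTPUT_DIM, TIME_STEPS = 8, 16, 4, 10

# Untrained stand-in parameters for the decoder recurrent neural network.
W_in = rng.normal(scale=0.1, size=(HIDDEN_DIM, INPUT_DIM))
W_hh = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))
W_out = rng.normal(scale=0.1, size=(OUTPUT_DIM, HIDDEN_DIM))

def decoder_step(h_prev, x):
    """Process the hidden state for the preceding time step and the decoder
    input for the time step to generate the hidden state for the time step."""
    return np.tanh(W_in @ x + W_hh @ h_prev)

h = np.zeros(HIDDEN_DIM)       # decoder hidden state vector
output = np.zeros(OUTPUT_DIM)  # current neural network output

for t in range(TIME_STEPS):
    x = rng.normal(size=INPUT_DIM)  # generate a decoder input for the time step
    h = decoder_step(h, x)          # decoder hidden state vector for the time step
    update = W_out @ h              # neural network output update for the time step
    output = output + update        # combine the update with the current output

# Generate the final output image from the final neural network output,
# here by squashing each value into a [0, 1] pixel intensity.
final_image = 1.0 / (1.0 + np.exp(-output))
```

The only step distinguishing the patent claims from the broader instant claims is that `decoder_step` here is explicitly recurrent, i.e., it consumes the hidden state from the preceding time step.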
Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARE F TABOR, whose telephone number is (571) 270-3155. The examiner can normally be reached Mon.-Fri., 8:00 AM to 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ALI SHAYANFAR, can be reached at (571) 270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMARE F TABOR/
Primary Examiner, Art Unit 2434