DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
In paragraphs 0198 and 0199, bitstream 510 is attributed to both a first bitstream and a second bitstream.
In paragraphs 0198 and 0199, bitstream 520 is attributed to both a first bitstream and a second bitstream.
Appropriate correction is required.
Double Patenting
A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (Emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).
A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.
Claims 7 and 25 are rejected under 35 U.S.C. 101 as claiming the same invention as that of claims 1 and 19, respectively, of prior U.S. Patent No. 11,924,445. This is a statutory double patenting rejection.
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-4, 8-13, 15-22, and 26-30 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-33 of U.S. Patent No. 11,924,445. Although the claims at issue are not identical, they are not patentably distinct from each other because the broader claims of the instant application are anticipated by the more narrowly drawn patented claims; patented claim 1 contains every limitation of instant claim 1, as shown in the mapping below.
1. An apparatus, comprising (claim 1 line 1):
memory (claim 1 line 2);
and one or more processors coupled to the memory, the one or more processors being configured to (claim 1 lines 3-4):
receive, by a neural network compression system, input data for compression by the neural network compression system (claim 1 lines 5-6);
determine a set of updates for the neural network compression system, the set of updates comprising updated model parameters tuned using the input data (claim 1 lines 7-9);
generate, by the neural network compression system using a latent prior, a first bitstream comprising a compressed version of the input data (claim 1 lines 10-11);
generate, by the neural network compression system using the latent prior and a model prior, a second bitstream comprising a compressed version of the updated model parameters (claim 1 lines 12-14);
and output the first bitstream and the second bitstream for transmission to a receiver (claim 1 lines 15-16).
Claims 2-4, 8-13, 15-22, and 26-30 are similarly mapped to claims 2-33 of U.S. Patent No. 11,924,445.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6, 14-17, 19-24, and 30 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Minnen, US 2020/0027247 (hereinafter “Minnen”).
Regarding claim 1, Minnen teaches an apparatus, comprising (see paragraph 0059, a data compression system):
memory (see paragraph 0116, the computer storage medium);
and one or more processors coupled to the memory, the one or more processors being configured to (see paragraphs 0117 and 0121, data processing hardware which can receive instructions and data from a memory):
receive, by a neural network compression system, input data for compression by the neural network compression system (see paragraph 0059, the compression system [consisting of neural networks see Figure 3] is configured to process input data to generate compressed representation of the input data);
determine a set of updates for the neural network compression system, the set of updates comprising updated model parameters tuned using the input data (see paragraphs 0072-0074, the code symbols [representing the input data] are input into the conditional entropy model neural network 112 to generate respective values of Gaussian mean and standard deviation parameters [updated model parameters, tuned based on context outputs from the input data] for each code symbol probability distribution [a set of updates]);
generate, by the neural network compression system using a latent prior, a first bitstream comprising a compressed version of the input data (see paragraph 0070 and Figure 1, the generation of the compressed representation of the quantized hyper-prior item 126 [a latent representation of the conditional entropy model based on the latent representation y (latent prior), see paragraph 0069] as a bit string [bitstream] in the neural network [see Figure 3] compression system 100);
generate, by the neural network compression system using the latent prior and a model prior, a second bitstream comprising a compressed version of the updated model parameters (see paragraph 0076 and Figure 1, the compressing of code symbols 120 to produce compressed code symbols 134, which are represented as a bit string [bitstream], using code symbols 120 from Quantizer 118 [latent prior], the conditional entropy model [model prior], and the code symbol probability distribution [set of updates comprising the updated parameters] from the entropy model neural network);
and output the first bitstream and the second bitstream for transmission to a receiver (see Figure 1 and paragraph 0060, item 126 [the compressed representation of the quantized hyper-prior] and item 134 [the compressed code symbols] are output as bit strings to generate the compressed data, which may be transmitted to a destination where it may then be retrieved [receiver]).
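For illustration only, the following NumPy sketch shows the kind of two-bitstream arrangement recited in claim 1 and mapped above: a first bitstream whose length is governed by a latent prior over the quantized latent, and a second bitstream whose length is governed by a model prior over quantized parameter updates. The linear "encoder," the priors, and the use of ideal code lengths (-log2 p) in place of an actual entropy coder are assumptions made for this sketch; it does not reproduce the claimed apparatus or Minnen's implementation.

    import numpy as np
    from math import erf, sqrt, log2

    def discretized_gaussian_bits(symbols, mean, scale):
        # Ideal code length (in bits) of integer symbols under a discretized
        # Gaussian prior: p(s) = CDF(s + 0.5) - CDF(s - 0.5), bits = -log2 p(s).
        cdf = lambda v: 0.5 * (1.0 + erf((v - mean) / (scale * sqrt(2.0))))
        total = 0.0
        for s in np.ravel(symbols):
            p = max(cdf(s + 0.5) - cdf(s - 0.5), 1e-12)
            total += -log2(p)
        return total

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.1, size=(8, 32))     # stand-in encoder weights (model parameters)
    x = rng.normal(0.0, 1.0, size=32)          # input data received for compression

    # Determine a set of updates: pretend instance tuning nudged the weights slightly.
    W_tuned = W + rng.normal(0.0, 0.01, size=W.shape)
    delta = np.round((W_tuned - W) / 0.01)     # quantized updated model parameters

    # First bitstream: quantized latent coded under a latent prior.
    y = np.round(W_tuned @ x)                  # quantized latent representation of the input
    bits_first = discretized_gaussian_bits(y, mean=0.0, scale=2.0)

    # Second bitstream: quantized parameter updates coded under a model prior.
    bits_second = discretized_gaussian_bits(delta, mean=0.0, scale=1.0)

    print(f"first bitstream  ~{bits_first:.1f} bits (compressed input data)")
    print(f"second bitstream ~{bits_second:.1f} bits (compressed model updates)")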
Regarding claim 2, Minnen teaches the apparatus of claim 1, wherein the second bitstream further comprises a compressed version of the latent prior and a compressed version of the model prior (see paragraph 0076 and Figure 1, the compressing of the code strings which come from the Quantizer [latent prior] and the conditional entropy model from the entropy model neural network [model prior]).
Regarding claim 3, Minnen teaches the apparatus of claim 1, wherein the one or more processors are configured to:
generate a concatenated bitstream comprising the first bitstream and the second bitstream (see paragraph 0077, the compressed data is generated by concatenating the bit strings of the code symbols [second bitstream] and hyper-prior [first bitstream]);
and send the concatenated bitstream to the receiver (see paragraph 0060, the compressed data [generated from the concatenated bitstream] may be transmitted to a destination where it may be retrieved).
Regarding claim 4, Minnen teaches the apparatus of claim 1, wherein, to generate the second bitstream, the one or more processors are configured to:
entropy encode, by the neural network compression system, the latent prior using the model prior (see paragraph 0076 and Figure 1, entropy encoding engine 132 entropy encodes the code symbols, which are quantized latent representations, in accordance with the conditional entropy model [model prior]);
and entropy encode, by the neural network compression system, the updated model parameters using the model prior (see paragraphs 0073-0074 and Figure 1, entropy encoding engine 132 encodes the outputs of the entropy model neural network, including the code symbol probability distribution [set of updates comprised of updated parameters] and the conditional entropy model [model prior]).
Regarding claim 5, Minnen teaches the apparatus of claim 1, wherein the updated model parameters comprise one or more updated parameters of a decoder model, the one or more updated parameters being tuned using the input data (see paragraph 0071, the hyper-decoder neural network is configured to process the quantized hyper-prior [generated from the input data] to generate a hyper-decoder output [which is used to generate an output that defines the conditional entropy model by specifying respective distribution parameters [updated parameters] for each code symbol probability distribution, see paragraph 0102]).
Regarding claim 6, Minnen teaches the apparatus of claim 1, wherein the updated model parameters comprise one or more updated parameters of an encoder model, the one or more updated parameters being tuned using the input data, wherein the first bitstream is generated by the neural network compression system using the one or more updated parameters (see paragraphs 0068-0070 and Figure 1, the hyper-encoder generates a conditional entropy model [the conditional entropy model defines a respective code symbol probability distribution [set of updates comprised of updated parameters], paragraph 0061] representing the input data, which is then compressed to a bit string in accordance with the entropy model).
Regarding claim 14, Minnen teaches the apparatus of claim 1, wherein the receiver comprises a decoder, and wherein the one or more processors are configured to:
receive, by the decoder, data comprising the first bitstream and the second bitstream (see paragraph 0065 and Figure 1, the encoders receive the input data and the latent representation [the data comprising the first bitstream 126 and second bitstream 134]);
decode, by the decoder, the compressed version of the updated model parameters based on the second bitstream (see paragraphs 0083-0084 and Figure 2, the decompression system's use of the hyper-decoder and the entropy model neural network to generate the code symbol probability distribution based on the context output and hyper-decoder output);
and generate, by the decoder using the set of updated parameters, a reconstructed version of the input data based on the compressed version of the input data in the first bitstream (see paragraph 0085 and Figure 2, the decoder neural network uses the first and second bitstream to generate a reconstruction of the input data).
Regarding claim 15, Minnen teaches the apparatus of claim 1, wherein the one or more processors are configured to:
train the neural network compression system by reducing a rate-distortion and model-rate loss, wherein a model-rate reflects a length of a bitstream for sending model updates (see paragraphs 0065 and 0087, training the compression system using a rate-distortion objective function to give the minimum bit rate for the distortion; the rate-distortion performance measures include the bit size [code length] of the entropy encoded representation of the latent representation of the data [size of the compressed version], the bit size of the entropy encoded representation of the entropy model [which includes the set of updates], and the difference between the input data and the reconstruction of the data).
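As an illustration of the rate-distortion and model-rate trade-off discussed above, the following sketch writes the combined objective as a plain function: distortion plus a weighted rate term for the first bitstream and a weighted model-rate term for the second bitstream. The weights lam and beta and the dummy values are assumptions for this sketch, not values taken from the claims or from Minnen.

    import numpy as np

    def rd_model_rate_loss(x, x_hat, bits_latent, bits_updates, lam=0.01, beta=0.001):
        # distortion: reconstruction error between the input and its reconstruction
        distortion = float(np.mean((x - x_hat) ** 2))
        # bits_latent: length of the bitstream carrying the compressed input (rate)
        # bits_updates: length of the bitstream carrying the model updates (model-rate)
        return distortion + lam * bits_latent + beta * bits_updates

    # Dummy example values:
    x = np.zeros(16)
    x_hat = x + 0.1
    print(rd_model_rate_loss(x, x_hat, bits_latent=120.0, bits_updates=35.0))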
Regarding claim 16, Minnen teaches the apparatus of claim 1, wherein the model prior comprises at least one of an independent Gaussian network prior, an independent Laplace network prior, and an independent Spike and Slab network prior (see paragraph 0072, the use of Gaussian parameters for the conditional entropy model [model prior]).
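For illustration only, the sketch below compares the ideal coding cost of a mostly-zero vector of quantized parameter updates under the three kinds of model priors named in claim 16 (independent Gaussian, Laplace, and spike-and-slab). The specific densities, the spike weight, and the crude unit-width discretization are assumptions of this sketch, not taken from either the claims or the cited references.

    import numpy as np

    def bits(p):
        return -np.log2(np.maximum(p, 1e-12))

    def gaussian_pmf(s, scale=1.0):
        # crude discretization: density at the integer value times a bin width of 1
        return np.exp(-0.5 * (s / scale) ** 2) / (scale * np.sqrt(2.0 * np.pi))

    def laplace_pmf(s, b=1.0):
        return np.exp(-np.abs(s) / b) / (2.0 * b)

    def spike_and_slab_pmf(s, spike=0.9, slab_scale=5.0):
        # most updates are exactly zero (spike); the rest follow a wide Gaussian (slab)
        return np.where(s == 0, spike, 0.0) + (1.0 - spike) * gaussian_pmf(s, slab_scale)

    delta = np.array([0, 0, 0, 1, 0, -2, 0, 0, 3, 0], dtype=float)  # quantized updates
    for name, pmf in [("Gaussian", gaussian_pmf),
                      ("Laplace", laplace_pmf),
                      ("spike-and-slab", spike_and_slab_pmf)]:
        print(name, round(float(np.sum(bits(pmf(delta)))), 1), "bits")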
Regarding claim 17, Minnen teaches the apparatus of claim 1, wherein the apparatus comprises a mobile device (see paragraph 0121, the computer that is suitable for the system can be embedded on a mobile telephone, personal digital assistant, mobile audio or video player, or portable storage device).
Claims 19-24 and 30 are analogous to claims 1-6, respectively, and claims 19-24 are analyzed and rejected in a manner similar to claims 1-6.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7, 12, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Minnen in view of Besenbruch, US 2023/0154055 (hereinafter “Besenbruch”).
Regarding claim 7, Minnen teaches the apparatus of claim 6. Minnen teaches that, to generate the second bitstream, the one or more processors are configured to:
encode, by the neural network compression system using the one or more updated parameters, the input data into a latent (see paragraphs 0066 and 0068 and Figure 1, the encoder neural network 106 processes the input data 102 into latent representation 116 of the input data [a latent representation, as defined in paragraph 0066, is a vector, matrix, or feature map of lower dimensionality representing the input data, which is generally treated as a latent space representation and used as such; however, because Minnen does not explicitly disclose a latent space representation, a secondary reference is relied upon to show obviousness], and the latent representation is encoded to generate a latent representation for the conditional entropy model [the conditional entropy model defines a respective code symbol probability distribution [set of updates comprised of updated parameters], paragraph 0061]);
and entropy encode, by the neural network compression system using the latent prior, the latent (see paragraph 0070 and Figure 1, the use of the entropy encoding engine to compress the quantized hyper-prior 124 [quantized latent representation] to generate a compressed representation 126 as a bit string).
Minnen does not explicitly teach a latent space representation.
Besenbruch teaches encode, by the neural network compression system using the one or more updated parameters, the input data into a latent space representation of the input data (see Figure 1, the input of the image [input data] into the encoder with the use of a latent bottleneck and parameters [see paragraph 0590, the use of tuned parameters for AI-based compression [depicted in Figure 1, paragraph 0542] to aid in finding a latent representation of the input data]);
and entropy encode, by the neural network compression system using the latent prior, the latent space representation into the first bitstream (see paragraph 0506, the quantized latent in the latent bottleneck is entropy encoded to provide the bitstream).
Besenbruch further teaches encode, by the neural network compression system using the one or more updated parameters, the input data into a latent space representation of the input data (see paragraph 0617 and Figure 81 [paragraph 0455], the encoder transforms the input vector [input data] to an M-dimensional latent vector y, transferring the data instance to a latent space; the encoder network generates moments [moments µ and σ, distribution parameters / updated parameters] of the latent space y that are used to normalize the latent space);
and entropy encode, by the neural network compression system using the latent prior, the latent space representation into the first bitstream (see paragraph 0619, the latent space is encoded into a bitstream by the process of entropy coding [entropy encoding]).
Minnen and Besenbruch are analogous art because they are from the same field of endeavor of image compression and transmission with the use of a neural network to receive the input, produce a latent space, and entropy encode the latent into a bitstream.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Minnen and Besenbruch to use a latent space representation. The motivation for doing so would have been to lower the encoding cost (Besenbruch, paragraph 0589).
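For illustration of the claim 7 limitations discussed above (encoding input data into a latent space representation and entropy encoding it using the latent prior), the sketch below uses a toy linear "encoder," per-instance moments µ and σ for normalization, and ideal code lengths under a unit Gaussian latent prior. All names, shapes, and the choice of prior are assumptions for this sketch; it is not Besenbruch's or the applicant's implementation.

    import numpy as np
    from math import erf, sqrt, log2

    rng = np.random.default_rng(1)
    W_enc = rng.normal(0.0, 0.2, size=(4, 16))  # toy encoder weights (stand-in for the encoder network)
    x = rng.normal(0.0, 1.0, size=16)           # input data

    y = W_enc @ x                               # latent space representation (M = 4 dimensions)
    mu = float(y.mean())                        # "moments" used to normalize the latent space
    sigma = float(y.std()) + 1e-6
    y_hat = np.round((y - mu) / sigma)          # normalized, quantized latent

    def unit_gaussian_bits(s):
        # ideal bits of an integer symbol under a discretized N(0, 1) latent prior
        cdf = lambda v: 0.5 * (1.0 + erf(v / sqrt(2.0)))
        return -log2(max(cdf(s + 0.5) - cdf(s - 0.5), 1e-12))

    print("first bitstream ~", round(sum(unit_gaussian_bits(s) for s in y_hat), 1), "bits")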
Claim 25 is analogous to claim 7; thus, claim 25 is analyzed and rejected in a manner similar to claim 7.
Regarding claim 12, Minnen teaches the apparatus of claim 1. Minnen teaches that, to determine the set of updates for the neural network compression system, the one or more processors are configured to:
process the input data at the neural network compression system (see paragraph 0059, the compression system [comprised of multiple neural networks, see Figure 3] is configured to process the input data);
(see paragraphs 0072-0074, Gaussian mean and standard deviation parameters [tuned model parameters] for each code symbol probability distribution [a set of updates]).
Minnen does not teach determine one or more losses for the neural network compression system based on the processed input data;
and tune model parameters of the neural network compression system based on the one or more losses.
Besenbruch teaches determine one or more losses for the neural network compression system based on the processed input data (see paragraphs 0563-0564, the use of a loss function for the network with an input of the compressed image [processed input data]);
and tune model parameters of the neural network compression system based on the one or more losses (see paragraph 0563, update the parameters of the model to a goal controlled by the loss function).
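For illustration of the claim 12 limitations mapped above (process the input data, determine a loss based on the processed input data, and tune model parameters based on that loss), the following is a minimal instance-specific tuning loop. The linear "model," the squared-error loss, and the gradient step are assumptions for this sketch and do not reproduce the training procedure of either reference.

    import numpy as np

    rng = np.random.default_rng(2)
    W = rng.normal(0.0, 0.1, size=(16, 16))   # model parameters to be tuned
    x = rng.normal(0.0, 1.0, size=16)         # the single input instance

    lr = 0.05
    for step in range(50):
        x_hat = W @ x                              # process the input with the current parameters
        loss = float(np.mean((x_hat - x) ** 2))    # one loss based on the processed input data
        grad = 2.0 * np.outer(x_hat - x, x) / x.size   # gradient of the loss w.r.t. W
        W -= lr * grad                             # tune the model parameters based on the loss

    print("loss after tuning:", round(loss, 4))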
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Minnen in view of Chen, US 2022/0019855 (hereinafter “Chen”).
Regarding claim 18, Minnen teaches the apparatus of claim 1.
Minnen does not teach a camera configured to capture the input data.
Chen teaches a camera configured to capture the input data (see paragraph 0370 and Figure 15, the camera obtains an image [input data] and inputs it into the apparatus).
Minnen and Chen are analogous art because they are from the same field of endeavor of neural network compression method for image processing.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Minnen and Chen to use a camera to capture the input data [image]. The motivation for doing so would have been to obtain the image and input it into the neural network, allowing the model to be applied directly on a small mobile device (Chen, paragraphs 0367 and 0003).
Allowable Subject Matter
Claims 8-11, 13, and 26-29 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see the attached 892 notice of references cited.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMILY R. HAUK whose telephone number is (571)272-5966. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EMILY HAUK/Examiner, Art Unit 2669
/CHAN S PARK/Supervisory Patent Examiner, Art Unit 2669