Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election with traverse of Group I, claims 47-58, is acknowledged. The traversal is unpersuasive because there is a lack of unity of invention between the two groups: the linking technical feature is not a special technical feature in light of Theis et al., US 10,623,775 (abstract and claim 1). Accordingly, claims 59-66 have been withdrawn from further consideration.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 47-58 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claims are directed to a system (apparatus), which is one of the statutory categories of invention. (Step 1: YES)
The examiner has identified apparatus claim 58 as representative of the claimed invention for analysis; claims 47 and 56 are similar.
Claim 58 recites the following limitations (additional elements are emphasized in bold and are considered apart from the remaining abstract idea): An apparatus comprising: at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: assign a tolerance value for loss variance of loss terms in a first set of pre-determined losses; disable gradients with respect to a first subset of the first set of pre-determined losses; minimize losses in a second subset of the first set of pre-determined losses till a tolerance for the first subset is violated, wherein the first subset and the second subset are disjoint subsets; switch roles of the first subset and the second subset, and repeat the previous steps; and stop repeating when one or more stopping conditions are met.
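For orientation only, the recited steps amount to an alternating-minimization loop over disjoint subsets of losses. The following Python sketch is purely illustrative: the toy losses, the coordinate-wise finite-difference "gradient," and all function names are the examiner's hypothetical constructions, not drawn from the application or the cited art.

```python
# Purely illustrative toy losses over a shared two-element parameter list.
def loss_a(p):
    return (p[0] - 1.0) ** 2

def loss_b(p):
    return (p[1] + 2.0) ** 2

def grad_step(loss, p, i, lr=0.2):
    # One finite-difference gradient step on coordinate i only; gradients
    # with respect to every other loss are "disabled" simply by not
    # stepping on their coordinates.
    eps = 1e-6
    q = list(p)
    q[i] += eps
    g = (loss(q) - loss(p)) / eps
    p[i] -= lr * g

def alternate_train(p, subsets, tol, rounds=6, inner=50):
    # Freeze one disjoint subset of losses, minimize the other until the
    # frozen subset drifts beyond `tol`, then switch the subsets' roles.
    active, frozen = subsets
    for _ in range(rounds):                      # stopping condition: round budget
        baseline = sum(l(p) for l, _ in frozen)  # tolerance reference for frozen subset
        for _ in range(inner):
            for loss, coord in active:
                grad_step(loss, p, coord)
            if abs(sum(l(p) for l, _ in frozen) - baseline) > tol:
                break                            # tolerance violated: end this phase
        active, frozen = frozen, active          # switch roles and repeat
    return p
```

In a real training system the subsets would be masked via automatic differentiation (e.g., detaching gradients per loss term) rather than coordinate-wise finite differences; the sketch only shows the control flow the limitations recite.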
This is a process that, under its broadest reasonable interpretation, covers performance of the limitation(s) as a mental process (a concept performed in the human mind) of calculating and minimizing loss functions and updating them until a stopping point is reached.
If a claim limitation, under its broadest reasonable interpretation (BRI), covers performance of the limitation as a certain method of a fundamental economic practice, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas.
Similarly, if a claim limitation, under its BRI, covers performance of the limitation in the human mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. (Claims can recite a mental process even if they are claimed as being performed on a computer. Gottschalk v. Benson, 409 U.S. 63; "Courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015).)
Accordingly, the claim recites an abstract idea. (Step 2A-Prong 1: YES. The claims are abstract)
This judicial exception is not integrated into a practical application. Limitations that are not indicative of integration into a practical application include: (1) adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); and (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In claim 58, the program for causing a computer to execute the method merely uses generic computer components.
The computer hardware is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, claim 58 is directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application.)
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception. Mere instructions to implement an abstract idea on or with the use of generic computer components, cannot provide an inventive concept - rendering the claim patent ineligible. Thus claim 58 is not patent eligible. (Step 2B: NO. The claims do not provide significantly more)
The dependent claims further define the abstract idea that is present in their respective independent claims and hence are abstract for at least the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination: they each recite additional calculation steps which can be performed in the human mind.
Therefore, the dependent claims are directed to an abstract idea. Thus, the aforementioned claims are not patent-eligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 47-58 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Coelho et al. US 20220201316 (“Coelho”).
Re 47: Coelho teaches an apparatus comprising:
at least one processor (¶40; Fig. 2); and
at least one non-transitory memory including computer program code (¶41; Fig. 2);
wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform (¶40-41, 93-115):
compute predetermined loss terms based on original data and decoded data (¶93);
train one or more neural networks of a system by using predetermined loss terms (¶115; Figs. 4-5, 8, 12);
update weights for one or more of other loss terms (¶115; Fig. 8); and
determine trade-offs between predetermined objectives of the system (¶115).
Re 48: wherein the predetermined loss terms and the other loss terms comprise one or more distortion metrics (¶115; abstract).
Re 49: wherein the one or more distortion metrics comprise mean squared error (MSE) losses (¶93), a sum of absolute differences (L1 norm), a sum of squared differences (L2 norm), or a multi-scale structural similarity index measure (MS-SSIM); the claim is in the alternative, and Coelho's MSE teaching satisfies it.
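For reference, the first three recited metrics are simple elementwise computations over original and reconstructed samples; a minimal illustrative sketch follows (MS-SSIM is omitted, as it requires a windowed structural comparison; these helper names are the examiner's, not the application's).

```python
def mse(x, y):
    # Mean squared error: average of squared elementwise differences.
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def l1(x, y):
    # Sum of absolute differences (L1 norm of the residual).
    return sum(abs(a - b) for a, b in zip(x, y))

def l2_sq(x, y):
    # Sum of squared differences (squared L2 norm of the residual).
    return sum((a - b) ** 2 for a, b in zip(x, y))
```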
Re 50: wherein the apparatus is further caused to combine one or more metrics with same or different weights (¶115).
Re 51: wherein the one or more neural networks of the system comprises one or more of a neural network encoder, a neural network decoder, or a probability model (¶84; Figs. 4-5, 8, and 12-13).
Re 52: wherein the apparatus is further caused to: set a non-zero weight for the predetermined loss terms; and set a zero weight for the one or more of the other loss terms (¶89, 115).
Re 53: wherein the one or more of the other loss terms do not comprise the predetermined loss terms.
Re 54: wherein the weights for one or more other losses are changed gradually in order to adapt the one or more neural networks non-abruptly (¶¶80-91, 115, 127).
Re 55: wherein the weights for one or more other losses are changed based on a priority of the one or more other losses (¶¶80-91, 115, 127).
Re 56: at least one processor (¶40; Fig. 2); and at least one non-transitory memory including computer program code (¶40-41; Fig. 2); wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: use a first set of pre-determined losses to dominate a gradient flow at a neural network warm-up phase (¶78-115: discusses using a combined loss with multipliers alpha and beta that balance different loss components, where alpha and beta are predetermined, also teaching Loss(RDC) is used to first train the model in a warm-up; Fig. 8); ease influence of the first set of pre-determined losses at an end or substantially at the end of the neural network warm-up phase (¶78-115: teaching a phased training/tuning approach: use Loss(RDC) first and then use Loss(partition) to fine-tune; Fig. 8); improve a task performance at the end or substantially at the end of the neural network warm-up phase (¶33: “A loss function that uses rate and distortion as described herein can result in a better trained model than such loss functions alone, hence improving coding efficiency.”; ¶78-115; Fig. 8); stop improving the task performance, after a predetermined time, to decrease a bit rate loss (¶25-32, 78-115; Fig. 8); and gradually increase a weight of the bit rate loss to achieve a pre-determined bit-rate or a pre-determined task performance (¶25-32, 78-115: e.g., “Other values for the variables α and β are possible to weight the partition loss function and the rate-distortion cost loss function. In some implementations, the machine-learning model may be trained/tuned using the loss function by applying a greater weight to the partition loss function than to the rate-distortion cost loss function.”; Fig. 8).
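The phased weighting mapped above (task losses dominating the gradient flow during warm-up, then a gradually increasing bit-rate weight) can be sketched as a simple schedule. The linear ramp below is a hypothetical illustration by the examiner, not a construction taken from Coelho; the function names and the choice of a linear ramp are assumptions.

```python
def rate_weight(step, warmup_steps, w_max):
    # Bit-rate loss weight: 0 during warm-up, so the pre-determined
    # (task) losses dominate the gradient flow; then a linear ramp
    # toward w_max over another `warmup_steps` iterations.
    if step < warmup_steps:
        return 0.0
    return min(w_max, w_max * (step - warmup_steps) / warmup_steps)

def total_loss(task_loss, rate_loss, step, warmup_steps=100, w_max=1.0):
    # Combined objective with the scheduled bit-rate weight.
    return task_loss + rate_weight(step, warmup_steps, w_max) * rate_loss
```

With `warmup_steps=100`, the bit-rate term contributes nothing for the first 100 steps, half weight at step 150, and full weight from step 200 onward.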
Re 57: wherein the apparatus is further caused to assign a tolerance value for a loss variance of each loss term in the first set of pre-determined losses (¶29, 115).
Re 58: at least one processor (¶40-41; Fig. 2); and at least one non-transitory memory including computer program code (¶40-41; Fig. 2); wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform (¶29, 40-41, 80-115; Fig. 2): assign a tolerance value for loss variance of loss terms in a first set of pre-determined losses (¶4, 115); disable gradients with respect to a first subset of the first set of pre-determined losses (¶115: teaches weighting wherein the weights can take 0 which is equivalent to disabling); minimize losses in a second subset of the first set of pre-determined losses till a tolerance for the first subset is violated, wherein the first subset and the second subset are disjoint subsets (¶4, 111, 115: identifies two distinct and disjoint loss components (partition loss vs. RDC loss)); switch roles of the first subset and the second subset, and repeat the previous steps; and stop repeating when one or more stopping conditions are met (¶115: teaching switching between the two loss components—train first using Loss(RDC), then fine tune using Loss(partition)—and also teaches that alpha and beta may be tuned during training, supporting iterative rebalancing of loss emphasis).
Conclusion
Relevant prior art considered: US 11729406 B2, teaching methods and an apparatus for compressing video content using deep generative models; one example method generally includes receiving video content for compression.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERALD J SUFLETA II whose telephone number is (571)272-4279. The examiner can normally be reached M-F 9AM-6PM EDT/EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ABDULMAJEED AZIZ can be reached at (571) 270-5046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
GERALD J. SUFLETA II
Primary Examiner
Art Unit 2875
/GERALD J SUFLETA II/Primary Examiner, Art Unit 2875