DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/12/25 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chandran et al. (US 2020/0380261) in view of Zhang et al. (US 2021/0067785).
Regarding claim 1, Chandran discloses an apparatus (see 100 in fig. 1A) comprising: machine learning (ML) engine circuitry (see 116 in fig. 1A); and a motion unit (see 108 in fig. 1A) comprising circuitry configured to generate an indication of whether the ML engine circuitry should process an input video frame (see 120 and 122 in fig. 1A), based on an analysis of motion characteristics of the input video frame (see 108 in fig. 1A); wherein the machine learning engine circuitry is selectively activated to process the input video frame further, based on the indication (see 120 & 116 in fig. 1A).
Although Chandran discloses wherein when activated the machine learning engine circuitry is configured to generate encoding information for a video encoder based on the input video frame (see 126a in fig. 1A), it is noted that Chandran does not expressly disclose that the encoding information is encoding-control information.
However, Zhang discloses an encoding system with machine learning generating encoding information wherein the encoding information is encoding-control information (e.g. see 116 in fig. 1).
Given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Zhang's teaching of quantization parameter estimation into Chandran's quantization estimation for the benefit of improving compression efficiency and/or video quality.
Regarding claims 2, 9 and 16, Chandran further discloses wherein the motion unit is further configured to identify one or more objects in the input video frame based on the motion characteristics (see 108 in fig. 1A; see 213 in fig. 2B), and when activated the ML engine circuitry is configured to execute an ML model on only the one or more objects identified by the motion unit (see 116 in fig. 1A).
Regarding claims 3, 10 and 17, Chandran further discloses wherein responsive to receiving an indication that a new object has not been detected in the input video frame and a change of scene has not been detected for the input video frame (see fig. 2E), the input video frame is encoded by the video encoder without use of a ML model (see 118 in fig. 1A).
Regarding claims 4, 11 and 18, the references further disclose wherein the encoding-control information generated by the ML engine circuitry comprises a quantization parameter (QP) map (see Zhang 116 in fig. 1).
Regarding claims 5, 12 and 19, the references further disclose a downscaling unit configured to downscale the input video frame to generate a downscaled version (e.g. see Chandran ¶ [0012]), the downscaled version being supplied to the ML engine circuitry for generating the encoding-control information when the ML engine circuitry is activated (e.g. see Chandran ¶ [0012] with 116 in fig. 1A; see Zhang 116 in fig. 1).
Regarding claims 6, 13 and 20, Chandran further discloses wherein, when activated, the ML engine circuitry is configured to execute an ML model on only the downscaled version of the input video frame (see 116 in fig. 1A; e.g. see ¶ [0012]).
Regarding claims 7 and 14, the references further disclose wherein the indication comprises at least an indication that a new object has been detected in the input video frame compared to a reference frame based on motion characteristics derived from motion vectors (see Chandran "New object" in fig. 2D), the indication being used to selectively activate the ML engine circuitry to generate the encoding-control information (see Zhang 116 in fig. 1).
Regarding claim 8, the claim recites limitations analogous to those of claim 1, and is therefore rejected on the same basis.
Regarding claim 15, the claim recites limitations analogous to those of claim 1, and is therefore rejected on the same basis.
Response to Arguments
Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection.
Citation of Pertinent Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Piacentino et al. (US 2021/0160422) discloses object and scene change detection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD T TORRENTE whose telephone number is (571)270-3702. The examiner can normally be reached M-F: 6:45-3:15 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel can be reached at (571) 272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICHARD T TORRENTE/Primary Examiner, Art Unit 2485