Prosecution Insights
Last updated: April 19, 2026
Application No. 18/526,539

VIDEO CODING FOR MACHINES (VCM) ENCODER AND DECODER FOR COMBINED LOSSLESS AND LOSSY ENCODING

Final Rejection — §103, §112
Filed
Dec 01, 2023
Examiner
LE, PETER D
Art Unit
2488
Tech Center
2400 — Computer Networks
Assignee
Op Solutions LLC
OA Round
2 (Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 80% (above average; 491 granted / 613 resolved; +22.1% vs TC avg)
Interview Lift: +16.9% (strong; from resolved cases with interview)
Typical Timeline: 2y 8m average prosecution; 35 applications currently pending
Career History: 648 total applications across all art units
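The headline figures above follow from simple arithmetic on the quoted counts. A minimal sketch, assuming the career allow rate is simply granted / resolved and that the interview-adjusted probability is the base rate plus the quoted lift in percentage points:

```python
# Sketch: reproduce the dashboard's headline rates from the quoted counts.
# Assumes allow rate = granted / resolved, and that the with-interview
# figure is base rate + lift (in percentage points).
granted = 491
resolved = 613
interview_lift_pp = 16.9  # quoted lift, percentage points

allow_rate = 100 * granted / resolved           # career allow rate, %
with_interview = allow_rate + interview_lift_pp

print(f"Career allow rate: {allow_rate:.1f}%")      # 80.1%
print(f"With interview:    {with_interview:.1f}%")  # 97.0%
```

This reproduces both the 80% career allow rate and the 97% with-interview projection quoted elsewhere on the page.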

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 49.5% (+9.5% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 613 resolved cases
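The "vs TC avg" deltas imply the Tech Center baseline directly (baseline = examiner rate − delta). A quick consistency check, assuming the deltas are in percentage points:

```python
# Sketch: recover the implied Tech Center baseline for each statute from
# the examiner's rate and the quoted delta, assuming percentage points:
# baseline = rate - delta.
stats = {
    "§101": (4.7, -35.3),
    "§103": (49.5, +9.5),
    "§102": (17.7, -22.3),
    "§112": (11.6, -28.4),
}
for statute, (rate, delta) in stats.items():
    baseline = rate - delta
    print(f"{statute}: examiner {rate}% vs TC avg ~{baseline:.1f}%")  # each ~40.0%
```

All four deltas imply the same ~40% Tech Center baseline, suggesting the chart compared every statute against a single TC-wide average rather than per-statute averages.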

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

The amendments to the claims, filed on 10/25/2025, have been entered and made of record. Claims 3-5, 8, and 12-15 are cancelled. Claims 1, 2, 6, 7, 9-11, and 16-19 are pending, with claims 1, 2, 6, 7, 9-11, and 16-19 being amended.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 10 depends from cancelled claim 8.

Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 11 depends from cancelled claim 8.

Response to Arguments

Arguments presented in the Remarks ("Remarks") filed on 10/25/2025 have been fully considered, but are rendered moot in view of the new grounds of rejection necessitated by amendments initiated by the applicant.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 6, 7, 11, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kang et al. ("Kang") [U.S. Patent Application Pub. 2022/0210435 A1] in view of Keisler et al. ("Keisler") [U.S. Patent No. 10,248,663 B1].

Regarding claim 1, Kang meets the claim limitations as follows: A video encoder for machine-based applications (i.e. 'VCM') [para. 0004, 0012: 'machine vision applications'; 'VCM'], the encoder comprising [Fig. 1]: a feature extractor (i.e. '110') [Fig. 1, 3, 5, 7, 16] comprising a partial convolutional neural network (i.e. 'a deep learning model') [Fig. 1: '170'; para. 0007, 0046, 0061: 'The feature extractor 110 operates based on a deep learning model'; 'The neural network interface 170 is a module for storing information (e.g., parameters) of deep learning models'] configured to receive a source video (i.e. Input Image) [Fig. 1, 7, 16] and extract at least one feature (i.e. 'ff1, … ') from the source video [Fig. 1, 7, 16]; and an MPEG video encoder (i.e. '150') [Fig. 1, 16; para. 0006, 0053, 0058, 0114, 0150: 'MPEG'; 'HEVC'; 'VVC'], the video encoder configured to receive the at least one feature and encode [para. 0057-0058: 'such as HEVC, VVC, or the like'; 'a deep learning-based autoencoder'] the at least one feature as a sub-picture (i.e. 'sub-block') [para. 0105: 'set a sub-block in the output feature map as a reference block'] in an encoded bitstream [Fig. 1] for a machine-based application (i.e. 'VCM') [para. 0004, 0012: 'machine vision applications'; 'VCM'].

Kang does not explicitly disclose the following claim limitations (emphasis added): a feature extractor comprising a partial convolutional neural network configured to receive a source video. However, in the same field of endeavor, Keisler discloses the deficient limitations as follows: a feature extractor comprising a partial convolutional neural network [Fig. 4B; col. 5, ll. 45-65: 'feature extraction is performed using a (partial) convolution neural net'] configured to receive a source video (i.e. 'the raw pixel') [col. 5, ll. 45-65]. Kang and Keisler are combinable because they are from the same field of video coding for machine vision. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang and Keisler, with motivation to include a partial convolutional neural network for feature extraction in aerial imagery [Figs. 7-17; col. 3, ll. 10-25].

Regarding claim 2, Kang meets the claim limitations as follows: The encoder of claim 1, wherein the feature extractor (i.e. '110') [Fig. 1, 7; para. 0046, 0082: 'feature map ft'] is configured to identify the sub-picture [para. 0105: 'set a sub-block in the output feature map as a reference block'].
Regarding claim 6, Kang meets the claim limitations as follows: The encoder of claim 1, wherein the MPEG encoder is one of an AVC encoder or a VVC encoder (i.e. 'HEVC, VVC, or the like') [para. 0053: 'The feature encoder 140 may encode … such as … HEVC … or VVC'].

Regarding claim 7, Kang meets the claim limitations as follows: The encoder of claim 1, wherein multiple feature types (i.e. 'ff1, ff2, … , ffN') [Fig. 3, 7; para. 0046, 0082: 'to extract a feature map'; 'feature map ft'] are extracted by the feature extractor [Fig. 7: '110'] and each feature type is encoded in a corresponding one of multiple sub-pictures [Fig. 1, 7; para. 0046, 0082, 0113: 'feature map ft'; 'may select a key feature map from among the feature maps extracted by the feature extractor'].

Regarding claim 11, Kang meets the claim limitations as follows: The encoder of claim 8 [Note: claim 8 is cancelled], wherein signaling the sub-picture further comprises signaling a type of feature (i.e. 'ff1, ff2, … , ffN') [Fig. 3, 7; para. 0046, 0082: 'to extract a feature map'; 'feature map ft'] included in the sub-picture [Fig. 7 shows a signaling bitstream comprising different types of features: bref, b1,res, … bN,res].

Regarding claim 16, Kang meets the claim limitations as follows: The encoder of claim 1, wherein the feature extractor extracts a plurality of features (i.e. 'ff1, ff2, … , ffN') [Fig. 3, 7; para. 0046, 0082, 0113: 'to extract a feature map'; 'feature map ft'; 'may select a key feature map from among the feature maps extracted by the feature extractor'] and the MPEG encoder encodes the features into a plurality of sub-pictures (i.e. 'ff1,res, ff2,res, … , ffN,res') [Fig. 7: Feature Encoder 140], the encoder being further configured to include signaling information about each sub-picture in the bitstream [Fig. 7 shows a signaling bitstream comprising different types of features: bref, b1,res, … bN,res].

Regarding claim 17, Kang meets the claim limitations as follows: The encoder of claim 16, wherein the signaling information includes a feature type (i.e. 'ff1, ff2, … , ffN') [Fig. 3, 7; para. 0046, 0082: 'to extract a feature map'; 'feature map ft'] of the sub-picture [Fig. 7 shows a signaling bitstream comprising different types of features: bref, b1,res, … bN,res].

Claims 9, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kang et al. ("Kang") [U.S. Patent Application Pub. 2022/0210435 A1] in view of Keisler et al. ("Keisler") [U.S. Patent No. 10,248,663 B1], and further in view of Wu et al. ("Wu") [U.S. Patent No. 10,397,518 B1].

Regarding claim 9, Kang meets the claim limitations as follows: The encoder of claim 1, wherein the MPEG encoder is further configured to signal the sub-picture location to a decoder [It is obvious in view of Wu].

Kang does not explicitly disclose the following claim limitations (emphasis added): wherein the MPEG encoder is further configured to signal the sub-picture location to a decoder. However, in the same field of endeavor, Wu discloses the deficient limitations as follows: wherein the MPEG encoder is further configured to signal the sub-picture location to a decoder [col. 1, ll. 55-65: 'The number and locations of the slices or tiles in each frame are communicated to the decoder in the metadata associated with the video stream']. Kang and Wu are combinable because they are from the same field of video coding. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang and Wu, since signaling the sub-picture location to a decoder is conventional in accordance with the H.264 or HEVC standards.
Regarding claim 18, Kang meets the claim limitations as follows: The encoder of claim 16, wherein the signaling information includes sub-picture location [It is obvious in view of Wu].

Kang does not explicitly disclose the following claim limitations (emphasis added): wherein the signaling information includes sub-picture location. However, in the same field of endeavor, Wu discloses the deficient limitations as follows: wherein the signaling information includes sub-picture location [col. 1, ll. 55-65: 'The number and locations of the slices or tiles in each frame are communicated to the decoder in the metadata associated with the video stream']. Kang and Wu are combinable because they are from the same field of video coding. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang and Wu, since signaling the sub-picture location to a decoder is conventional in accordance with the H.264 or HEVC standards.

Regarding claim 19, Kang meets the claim limitations as follows: The encoder of claim 16, wherein the signaling information includes a feature type (i.e. 'ff1, ff2, … , ffN') [Fig. 3, 7; para. 0046, 0082: 'to extract a feature map'; 'feature map ft'] of the sub-picture [Fig. 7 shows a signaling bitstream comprising different types of features: bref, b1,res, … bN,res] and a location of the sub-picture [It is obvious in view of Wu].

Kang does not explicitly disclose the following claim limitations (emphasis added): wherein the signaling information includes a feature type of the sub-picture and a location of the sub-picture. However, in the same field of endeavor, Wu discloses the deficient limitations as follows: wherein the signaling information includes … a location of the sub-picture [col. 1, ll. 55-65: 'The number and locations of the slices or tiles in each frame are communicated to the decoder in the metadata associated with the video stream']. Kang and Wu are combinable because they are from the same field of video coding. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang and Wu, since signaling the sub-picture location to a decoder is conventional in accordance with the H.264 or HEVC standards.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kang et al. ("Kang") [U.S. Patent Application Pub. 2022/0210435 A1] in view of Keisler et al. ("Keisler") [U.S. Patent No. 10,248,663 B1], and further in view of Liu et al. ("Liu") [U.S. Patent Application Pub. 2022/0116627 A1].

Regarding claim 10, Kang meets the claim limitations as follows: The encoder of claim 8 [Note: claim 8 is cancelled], wherein signaling the sub-picture further comprises signaling a sequence of frames including the sub-picture [It is obvious].

Kang does not explicitly disclose the following claim limitations (emphasis added): wherein signaling the sub-picture further comprises signaling a sequence of frames including the sub-picture. However, in the same field of endeavor, Liu discloses the deficient limitations as follows: wherein signaling the sub-picture further comprises signaling a sequence of frames [para. 0025, 0029-0030, 0033, 0039: 'a sequence of video frames that is output'] including the sub-picture. Kang and Liu are combinable because they are from the same field of video coding for machine vision. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang and Liu, since signaling a sequence of frames is routine in video coding for machine vision.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER D LE, whose telephone number is (571) 270-5382. The examiner can normally be reached Monday through alternate Fridays, 10 AM-6:30 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SATH PERUNGAVOOR, can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER D LE/
Primary Examiner, Art Unit 2488
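The reply windows in the conclusion (a three-month shortened statutory period, extendable to an absolute six-month limit under 35 U.S.C. 133) reduce to calendar-month arithmetic. A minimal sketch, assuming the Dec 15, 2025 mailing date and ignoring the roll-forward to the next business day under 37 CFR 1.7:

```python
# Sketch: compute the 3-month shortened statutory period and the 6-month
# absolute statutory deadline from an office action mailing date.
# Assumes the Dec 15, 2025 mailing date; ignores weekend/holiday
# roll-forward under 37 CFR 1.7.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to the target month's end."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

mailed = date(2025, 12, 15)
print("Reply due (no extension fee):", add_months(mailed, 3))  # 2026-03-15
print("Absolute statutory deadline: ", add_months(mailed, 6))  # 2026-06-15
```

The day-clamping branch matters only for end-of-month mailing dates (e.g., adding one month to Jan 31 lands on the last day of February).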

Prosecution Timeline

Dec 01, 2023
Application Filed
Apr 22, 2025
Non-Final Rejection — §103, §112
Oct 25, 2025
Response Filed
Dec 15, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582306
SCANNER FOR DENTAL TREATMENT, AND DATA TRANSMISSION METHOD OF SAME
2y 5m to grant • Granted Mar 24, 2026
Patent 12585104
IMAGE PICKUP MODULE, ENDOSCOPE, AND METHOD FOR MANUFACTURING IMAGE PICKUP MODULE
2y 5m to grant • Granted Mar 24, 2026
Patent 12574478
SECURITY OPERATIONS OF PARKED VEHICLES
2y 5m to grant • Granted Mar 10, 2026
Patent 12568184
TECHNIQUES TO GENERATE INTERPOLATED VIDEO FRAMES
2y 5m to grant • Granted Mar 03, 2026
Patent 12568210
METHOD AND DEVICE FOR ENCODING/DECODING IMAGE, AND RECORDING MEDIUM IN WHICH BITSTREAM IS STORED
2y 5m to grant • Granted Mar 03, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 97% (+16.9%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 613 resolved cases by this examiner. Grant probability derived from career allow rate.
