Prosecution Insights
Last updated: April 19, 2026
Application No. 18/322,310

PANOPTIC MASK PROPAGATION WITH ACTIVE REGIONS

Final Rejection — §103, §112
Filed: May 23, 2023
Examiner: HALLENBECK-HUBER, JEREMIAH CHARLES
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Adobe Inc.
OA Round: 2 (Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 69% (456 granted / 659 resolved; +11.2% vs TC avg, above average)
Interview Lift: +13.1% (moderate) among resolved cases with interview
Typical Timeline: 3y 5m avg prosecution; 34 applications currently pending
Career History: 693 total applications across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 18.7% (-21.3% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 659 resolved cases
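If the chart's black line is simply the examiner's rate minus the reported delta (an assumption; the dashboard does not show its formula), all four statute lines imply the same Tech Center baseline:

```python
# (examiner rate %, reported delta vs TC avg %) per statute, from the chart above
rates = {
    "101": (8.4, -31.6),
    "103": (48.3, +8.3),
    "102": (18.7, -21.3),
    "112": (11.3, -28.7),
}

# Implied TC average = examiner rate minus reported delta
tc_avg = {s: round(r - d, 1) for s, (r, d) in rates.items()}
print(tc_avg)  # every statute implies a 40.0% TC baseline
```

That all four deltas resolve to one 40.0% baseline suggests the chart measures each statute against a single Tech Center average rather than per-statute averages.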

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because they relate to newly amended claim limitations for which new art (Vetro) is provided.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-9 and 14-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 5-6, 9 and 14-15 are directed to performing operations on 'a' or 'the' subset of tokens. However, as amended, independent claims 1 and 10, from which the above claims depend, require both a first and a second subset of tokens. It is unclear whether the operations of claims 5-6, 9 and 14-15 are to be performed on the first subset of tokens, the second subset of tokens, or both subsets of tokens. For the purposes of examination, the examiner will interpret the claims as requiring the actions described in claims 5-6, 9 and 14-15 to be performed on at least one subset of the tokens.

Claims 7-8 and 16-17 recite a second subset of tokens.
However, as amended, independent claims 1 and 10, from which the above claims depend, already describe a second subset of tokens. Thus it is unclear whether the second subset of tokens in claims 7-8 and 16-17 is the same 'second' subset as required by claims 1 and 10 or an additional, third, subset of tokens. For the purposes of examination, the examiner will treat claims 7-8 and 16-17 as reciting a 'third' subset of tokens instead of a second subset, which appears to be in accordance with the applicant's intention and the description of Fig. 3 in the specification.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Schulze (2016/0150235) in view of Ogawa (2023/0154016) and in further view of Vetro et al. (6,650,705).
In regard to claim 1, Schulze discloses a method comprising: receiving a frame depicting an object, the frame being one of a plurality of frames of a video sequence (Schulze Fig. 2 and pars 38-41, note receiving video from camera 210; further note Figs. 9A-B and pars 79-86, particularly par. 80, a current frame including a moving object); encoding a plurality of tokens of the frame, each token being a representation of a grid of pixels of a frame (Schulze Figs. 9A-B and pars 79-86, note foreground block 922 as a 'token' that represents a grid of foreground pixels); selecting first and second subsets of tokens for decoding (Schulze pars 79-86, note dividing a block into foreground and background 'tokens'; further note Fig. 4 and par. 44, a frame comprises a plurality of blocks, and hence a first subset of foreground 'tokens' and a second subset of background 'tokens'; finally note Fig. 6A and pars 46-60, a first subset of tokens, e.g. the foreground tokens, are selected to be encoded and decoded by an upper decoder as shown in Figs. 5 and 6A, which show layer L1 being decoded by the upper decoding process into quantized output Q1, residual block R1' and decoded layer L1'); and decoding, by a decoder, a first subset of tokens (Schulze Fig. 6A and pars 59-60, note decoding the selected subset of tokens, e.g. the foreground, into the decoded representation L1').

Schulze suggests that tokens may be divided into subsets based on moving and non-moving objects (Schulze par. 80, note foreground layer corresponds to a moving object). It is noted that Schulze does not disclose details of selecting subsets of tokens based on a confidence score. However, Ogawa discloses a method for selecting a subset of 'tokens' corresponding to a moving object based on a likelihood of a token satisfying a confidence threshold, wherein the confidence threshold is based on a confidence score of the token including a past object in a past frame (Ogawa Fig. 5 and pars 48-52, note tracking the true motion of an object between frames; particularly note par. 52, determining the position of the object of the past frame in the current frame based on a cross correlation score (confidence score) being above a threshold value). It is therefore considered obvious that one of ordinary skill in the art before the effective filing date of the invention would recognize the advantage of incorporating a method of selecting 'tokens' corresponding to moving objects as taught by Ogawa in the invention of Schulze in order to gain the advantage of obtaining accurate object motion as suggested by Ogawa (Ogawa par. 3, note high accuracy tracking).

Schulze and Ogawa teach identifying a subset of tokens as a first, foreground, subset of tokens based on the likelihood of tokens satisfying a confidence threshold and identifying tokens as a second, 'background' subset of tokens based on the likelihood of the tokens not satisfying confidence thresholds, as noted above. It is noted that Schulze and Ogawa do not disclose details of excluding a second, background, subset of tokens from the decoder. However, Vetro discloses encoding video by selecting first and second subsets of image data corresponding to background and foreground objects (Vetro, generally note Figs. 5-7 and col. 8 line 18 to col. 10 line 2 for details of encoding and selecting image subsets; further note Fig. 8 and col. 10 lines 6-19, first and second extracted objects including a background in sequence 801 and a foreground in sequence 802). Vetro further discloses excluding a second, background, subset of image data from the decoder (Vetro Fig. 8 and col. 10 lines 21-30, note not coding the first, background object for a time period, or coding the object at a lower framerate for a period, and hence not coding a portion of the frames of the object as illustrated in Fig. 4; further note that by omitting coding of the object, the subset of image data corresponding to the first, background object for the omitted frames will be excluded from decoding; note col. 5 lines 56-64, decoding of encoded image data being well known).

In regard to claim 2, refer to the statements made in the rejection of claim 1 above. Schulze further discloses: encoding a representation of visual information in the past frame (Schulze par. 80, note previously encoded frame); creating an affinity matrix by comparing each token of the plurality of tokens to the encoded representation of visual information in the past frame (Schulze pars 79-86, note motion vectors indicating the location of the same pixels in a previous frame); encoding a mask probability of a particular past object in the past frame (Schulze par. 82, note generating masks M1 and M2 based on the motion vector information); and obtaining a memory readout for the particular past object by applying the encoded mask probability of the particular past object in the past frame to the affinity matrix (Schulze pars 79-80, note motion vectors represent predictions from pixels in past frames; further note Figs. 5-6A and pars 46-60, predictions are obtained from the past frame (memory) to predict the pixels of the current block).

In regard to claim 3, refer to the statements made in the rejection of claim 2 above. Schulze further discloses: encoding an existence of each particular masked object of a plurality of masked objects (Schulze pars 46-60 and Fig. 5, note encoding each of a plurality of masked layers, any number of layers may be used; further note pars 79-86 and Fig. 9A, one or more masked layers may correspond to a moving object).

In regard to claim 4, refer to the statements made in the rejection of claim 3 above.
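The affinity-matrix and memory-readout steps mapped for claim 2 above (compare current-frame tokens to an encoded past frame, then apply a past object's mask probability to the resulting affinity matrix) can be illustrated with a small sketch. This is purely illustrative: the function name, feature shapes, and softmax normalization are hypothetical and are not taken from the application or the cited references.

```python
import numpy as np

def memory_readout(tokens_q, past_keys, past_mask_prob):
    """Affinity-based readout, roughly as characterized for claim 2.

    tokens_q:       (N, d) current-frame token features
    past_keys:      (M, d) encoded past-frame features
    past_mask_prob: (M,)   per-location mask probability of the past object
    All names and shapes are hypothetical, for illustration only.
    """
    affinity = tokens_q @ past_keys.T                   # (N, M) similarities
    affinity = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    affinity /= affinity.sum(axis=1, keepdims=True)     # normalize per token
    return affinity @ past_mask_prob                    # (N,) readout

# Hypothetical usage: 4 tokens, 4 past locations, alternating mask.
readout = memory_readout(np.eye(4), np.eye(4),
                         np.array([1.0, 0.0, 1.0, 0.0]))
```

Because each affinity row is normalized, the readout for a token is a convex combination of the past mask probabilities, i.e. a per-token estimate of how strongly the past object projects onto that token.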
Schulze and Ogawa further disclose: determining the confidence score of the token including the past object in the past frame by applying the existence of the particular masked object to the memory readout for the particular past object (Ogawa pars 48-52, note determining a cross correlation between the identified object features of a past frame and detected features of a current frame).

In regard to claim 5, refer to the statements made in the rejection of claim 1 above. Schulze further discloses that the decoder is a set-based encoder (Schulze par. 44, note a frame is encoded as a set of blocks) and further comprising: indexing a subset of tokens (Schulze pars 66-67, note blocks are indexed by position x and y); and decoding the indexed subset of tokens (Schulze Fig. 6A, note decoding image blocks).

In regard to claim 6, refer to the statements made in the rejection of claim 1 above. Schulze further discloses that the decoder is a convolutional decoder (Schulze pars 61-63, note either selecting or merging layers for the 'convolutional decoder') and further comprising: masking each of the plurality of tokens of the frame that are not included in the selected subset of tokens (Schulze Fig. 9A and pars 79-86, note masking pixels which are not included in the selected layer, e.g. the background pixels are masked in the foreground layer).

In regard to claim 7, refer to the statements made in the rejection of claim 1 above. Schulze and Ogawa further disclose: receiving a second frame including at least two objects (Schulze Fig. 2 and pars 38-41, note receiving video data comprising multiple frames; further note Figs. 8 and 10 for examples of other frames encoded using a layered representation); encoding a second plurality of tokens of the second frame, each token being a representation of a grid of pixels of the second frame (Schulze Fig. 9A and pars 79-86, note par. 80, dividing a frame into sets of foreground 'tokens' 922 and background 'tokens' 921 representing a moving foreground object and a stationary background object); and selecting a third subset of tokens for decoding, wherein the third subset of tokens includes a first set of tokens corresponding to a first object and a second set of tokens corresponding to a second object (Schulze par. 80, note 'tokens' 922 and 924 correspond to two objects respectively, one moving foreground object and one stationary background object).

Schulze suggests that tokens may be divided into subsets based on moving and non-moving objects (Schulze par. 80, note foreground layer corresponds to a moving object). It is noted that Schulze does not disclose details of selecting subsets of tokens based on a confidence score. However, Ogawa discloses a method for selecting a subset of 'tokens' corresponding to a moving object based on a likelihood of a token satisfying a confidence threshold, wherein the confidence threshold is based on a confidence score of the token including a past object in a past frame (Ogawa Fig. 5 and pars 48-52, note tracking the true motion of an object between frames; particularly note par. 52, determining the position of the object in the past frame in the current frame based on a cross correlation score (confidence score) being above a threshold value). It is therefore considered obvious that one of ordinary skill in the art before the effective filing date of the invention would recognize the advantage of incorporating a method of selecting 'tokens' corresponding to moving objects as taught by Ogawa in the invention of Schulze in order to gain the advantage of obtaining accurate object motion as suggested by Ogawa (Ogawa par. 3, note high accuracy tracking).

In regard to claim 8, refer to the statements made in the rejection of claim 7 above.
Schulze further discloses: determining that the first set of tokens representing a grid of pixels in the second frame is a number of pixels apart from the second set of tokens representing another grid of pixels in the second frame (Schulze Fig. 9A, note the foreground and background tokens are at least one pixel apart, as they do not overlap); masking each of the plurality of tokens of the second frame that are not included in the third subset of tokens (Schulze Figs. 8A, 9A and 10A, note masks M1 and M2 are applied to all pixels or 'tokens' which do not apply to the current layer; further note pars 46-47, more than two layers may be generated, and thus the masks M1 and M2 would mask any 'tokens' corresponding to a third or further layers); combining the first set of tokens of the third subset and the second set of tokens of the third subset into a single encoded representation (Schulze Fig. 5 and pars 46-60, note combining all 'token' layers into a single encoded representation 570); and decoding the single encoded representation (Schulze Fig. 6A and pars 46-60, note decoding the encoded representation 570).

In regard to claim 9, refer to the statements made in the rejection of claim 1 above. Schulze further discloses that the first subset of tokens is an active region corresponding to the object of the frame (Schulze par. 80, note the foreground 'tokens' correspond to a moving object, which represents an active region of the frame as opposed to a stationary object).

Claims 10-20 describe an apparatus and a non-transitory computer readable medium that cause a processor and a memory to perform steps that correspond to the method described in claims 1-9. Refer to the statements made in regard to claims 1-9 above for the rejection of claims 10-20, which will not be repeated here for brevity. In particular regard to claims 10 and 18, Schulze further discloses an apparatus comprising a processor and a memory that may receive instructions from a computer readable medium (Schulze Fig. 2 and pars 38-41, note processor 220 and memory 230, the memory 230 including non-transitory media storing instructions to be executed by the processor).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 10200695 B2 (Dutt, Yashwant et al.)
US 20030112867 A1 (Hannuksela, Miska et al.)
US 20040239762 A1 (Porikli, Fatih M. et al.)
US 20110096836 A1 (Einarsson, Torbjorn)
US 20130198794 A1 (Dharmapurikar, Makarand)
US 20140132789 A1 (Koyama, Masae)
US 20150334398 A1 (Socek, Daniel et al.)
US 20190089981 A1 (Lv, Zhuoyi et al.)
US 20210133475 A1 (Sudar, Ron et al.)

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMIAH CHARLES HALLENBECK-HUBER, whose telephone number is (571) 272-5248. The examiner can normally be reached Monday to Friday from 9 A.M. to 5 P.M. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEREMIAH C HALLENBECK-HUBER/
Primary Examiner, Art Unit 2481
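As context for the §103 mapping of claim 1 above (encode a frame into grid-of-pixel tokens, split them into subsets by a confidence score against a threshold, and decode only the first subset), the flow can be sketched roughly as follows. This is an illustrative sketch only: the function names, grid size, and confidence inputs are hypothetical and are not taken from the application or the cited art.

```python
import numpy as np

def encode_tokens(frame: np.ndarray, grid: int = 16):
    """Split a frame into non-overlapping grid-of-pixel 'tokens'."""
    h, w = frame.shape[:2]
    return [frame[y:y + grid, x:x + grid]
            for y in range(0, h, grid)
            for x in range(0, w, grid)]

def select_subsets(tokens, confidences, threshold=0.5):
    """First subset: tokens whose confidence score (e.g. a cross
    correlation against the object's position in a past frame)
    meets the threshold; second subset: the remaining tokens,
    which would be excluded from the decoder."""
    first = [t for t, c in zip(tokens, confidences) if c >= threshold]
    second = [t for t, c in zip(tokens, confidences) if c < threshold]
    return first, second

# Hypothetical usage: a 32x32 frame yields four 16x16 tokens, two of
# which clear the confidence threshold and reach the decoder.
frame = np.zeros((32, 32), dtype=np.uint8)
tokens = encode_tokens(frame)
first, second = select_subsets(tokens, confidences=[0.9, 0.2, 0.6, 0.1])
```

The split mirrors the examiner's characterization: the first (foreground/active) subset goes to the decoder, while the second (background) subset is what Vetro is cited for excluding.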

Prosecution Timeline

May 23, 2023
Application Filed
Jul 19, 2025
Non-Final Rejection — §103, §112
Oct 14, 2025
Interview Requested
Oct 20, 2025
Applicant Interview (Telephonic)
Oct 20, 2025
Examiner Interview Summary
Oct 21, 2025
Response Filed
Feb 07, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604012
CODING METHOD, ENCODER, AND DECODER
2y 5m to grant • Granted Apr 14, 2026
Patent 12604026
MOVING PICTURE CODING METHOD, MOVING PICTURE DECODING METHOD, MOVING PICTURE CODING APPARATUS, MOVING PICTURE DECODING APPARATUS, AND MOVING PICTURE CODING AND DECODING APPARATUS
2y 5m to grant • Granted Apr 14, 2026
Patent 12593043
VIDEO COMPRESSION AT SCENE CHANGES FOR LOW LATENCY INTERACTIVE EXPERIENCE
2y 5m to grant • Granted Mar 31, 2026
Patent 12593046
SUB-BLOCK DIVISION-BASED IMAGE ENCODING/DECODING METHOD AND DEVICE
2y 5m to grant • Granted Mar 31, 2026
Patent 12587670
VIDEO CODING AND DECODING
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 82% (+13.1%)
Median Time to Grant: 3y 5m
PTA Risk: Moderate
Based on 659 resolved cases by this examiner. Grant probability derived from career allow rate.
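Assuming (the dashboard does not state its formula) that the headline figures are a plain ratio of the career counts plus an additive interview adjustment, the numbers above reproduce exactly:

```python
# Career counts and interview lift as reported by the dashboard
granted, resolved = 456, 659
interview_lift = 0.131  # additive lift reported for cases with an interview

allow_rate = granted / resolved          # base grant probability
with_interview = allow_rate + interview_lift

print(f"{allow_rate:.0%}")       # 69%
print(f"{with_interview:.0%}")   # 82%
```

456/659 rounds to 69%, and 69.2% + 13.1% rounds to 82%, matching both the header and the projections panel.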
