Prosecution Insights
Last updated: April 19, 2026
Application No. 18/276,302

SPATIAL LOCAL ILLUMINATION COMPENSATION

Non-Final OA §103
Filed: Aug 08, 2023
Examiner: LEE, JIMMY S
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: InterDigital Madison Patent Holdings SAS
OA Round: 5 (Non-Final)
Grant Probability: 56% (Moderate)
OA Rounds: 5-6
To Grant: 3y 7m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 56% (170 granted of 302 resolved cases; -1.7% vs TC avg)
Interview Lift: +28.1% (strong), for resolved cases with interview
Typical Timeline: 3y 7m avg prosecution; 33 currently pending
Career History: 335 total applications across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 71.5% (+31.5% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 302 resolved cases
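As a cross-check on the figures above, every displayed "vs TC avg" delta is consistent with a single Tech Center average estimate of about 40%. A minimal sketch (the 40% figure is inferred from the deltas, not stated anywhere on this page):

```python
# Examiner's per-statute rates from the dashboard (percent).
examiner_rates = {"101": 3.2, "103": 71.5, "102": 8.8, "112": 12.8}

# Inferred Tech Center average estimate (percent). This one value
# reproduces every displayed delta, so it is presumably what the
# "black line" on the chart marks.
TC_AVG_ESTIMATE = 40.0

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"§{statute}: {rate}% ({delta:+.1f}% vs TC avg)")
```

If the chart's average were per-statute rather than a single estimate, the deltas would not all share one baseline; here they do.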

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 26 January 2026 has been entered.

Response to Arguments

Applicant’s arguments with respect to claim(s) 1, 27, 37-38 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 5, 27-28, 37-38 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng; Yunfei et al. (US 20110007800 A1) in view of Zhang; Kai et al. (US 20190260996 A1).

Regarding claim 1, Zheng teaches: A method for video decoding, (¶82-91 and Fig. 8, “decoder for decoding picture data”) comprising: obtaining, for a current block in a picture, (¶81-83, 87-90, and Fig. 8, receiving an “input bitstream” that is an output from encoder 700 of an input “image block”) information (¶83, 87, and Fig. 8, “receiving an input bitstream”) of the current block; (¶87 and 90, “input bitstream” received by decoder corresponding to method used for “receiving a substantially uncompressed image block”) determining, for the current block, (¶90, received “image block” which has “illumination compensation model” applied to it) parameters (¶90 and Fig. 8, “computing illumination compensation parameters”) for a spatial local illumination compensation (¶90, 109, and Fig. 10, “illumination compensation parameters” that best “approximate the current block” by “performing displaced intra prediction”) based on spatially neighboring reconstructed samples of the current block (¶90-92, 109 and Fig. 9, “adaptive illumination compensation for intra prediction” that best “approximate the current block” by applying illumination compensation parameters referring to “intra prediction associated with non-predicted picture” denoted by reference number 950 depicted in Fig. 9 with a “corresponding reconstructed” template denoted by “x” adjacent to “original block” that is “denoted by Y”) and corresponding spatially neighboring reconstructed samples (¶90-92 and Fig. 9, “adaptive illumination compensation for intra prediction” applying illumination compensation parameters referring to “reconstructed picture” denoted by number 960 as depicted in Fig. 9, which includes “P1 and P2 respectively represent a first prediction and a second prediction, and Q1 and Q2 respectively represent the templates of the first and the second predictions”) of at least one spatial reference block in the picture, (¶92 and Fig. 9, intra prediction relating to the reconstructed picture denoted by number 960 as depicted in Fig. 9, which includes “P1 and P2 respectively represent a first prediction and a second prediction, and Q1 and Q2 respectively represent the templates of the first and the second predictions”, within the same “reconstructed picture”, which appear as reconstructed blocks within the intra prediction section as depicted in Fig. 9) wherein the at least one spatial reference block in the picture is a neighboring block (¶92, 101, and Fig. 9, intra prediction relating to the reconstructed picture denoted by number 960 as depicted in Fig. 9, which includes “P1 and P2 respectively represent a first prediction and a second prediction, and Q1 and Q2 respectively represent the templates of the first and the second predictions”, within the same “reconstructed picture”, such that illumination compensation is coded with respect to “surrounding neighboring data”) and decoding the current block (¶90, decoder method for “an image block using adaptive illumination compensation”) using the final prediction (¶90, “the prediction block”) of the current block (¶90, decoding method where “an image block” has applied “adaptive illumination compensation parameters on the prediction block”).

It should be pointed out that Zheng teaches that both the computation of illumination compensation parameters and the application of intra prediction with illumination compensation correspond with the signals of blocks in the same reconstructed picture.
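The linear-model local illumination compensation (LIC) scheme the rejection describes, deriving scale and offset parameters from neighboring reconstructed samples and applying them to a prediction, can be sketched as follows. This is a generic illustration of the technique, not code from either cited reference; the closed-form least-squares fit is the standard one used in LIC proposals, and the function names are the editor's own.

```python
def derive_lic_params(neigh_cur, neigh_ref):
    """Least-squares fit of a linear model cur ~ a * ref + b, from the
    spatially neighboring reconstructed samples of the current block
    (neigh_cur) and the corresponding neighbors of the spatial
    reference block (neigh_ref)."""
    n = len(neigh_cur)
    sx = sum(neigh_ref)
    sy = sum(neigh_cur)
    sxx = sum(x * x for x in neigh_ref)
    sxy = sum(x * y for x, y in zip(neigh_ref, neigh_cur))
    denom = n * sxx - sx * sx
    if denom == 0:  # flat neighborhood: fall back to an offset-only model
        return 1.0, (sy - sx) / n
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def apply_lic(pred_block, a, b):
    """Apply the compensation sample-wise: pred' = a * pred + b."""
    return [a * p + b for p in pred_block]

# Example: a uniform brightness drop of 10 between the reference and
# current neighborhoods yields a scale of 1 and an offset of -10.
a, b = derive_lic_params([90, 100, 110, 120], [100, 110, 120, 130])
print(a, b)  # -> 1.0 -10.0
```

The final prediction in the claims corresponds to `apply_lic` run over the intra prediction of the current block.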
In order to apply illumination compensation in Zheng ¶90 and 92, the illumination compensation parameters applied to intra prediction implicitly correspond to residual signals using references from reconstructed blocks that are in the same image as the block being predicted. The prediction references are additionally temporally or positionally displaced from the block being predicted (Zheng ¶92 and Fig. 9), which is information that is itself inherently determined before intra prediction occurs. It would be obvious that computing the illumination compensation parameters would also specify what the intra prediction uses as a reference for prediction. This allows a technique that can compensate for fade or temporal illumination variation.

But Zheng does not explicitly teach: obtaining an intra prediction of the current block according to an intra prediction mode; selecting at least one spatial reference block in the picture responsive to the intra prediction mode of the current block; determining a final prediction of the current block by applying spatial local illumination compensation to the intra prediction of the current block based on the determined parameters.

However, Zhang teaches additionally: obtaining an intra prediction of the current block (¶177 and Fig. 20, “Intra-prediction processing unit 166 may use an intra prediction mode to generate the predictive blocks of the PU based on samples spatially-neighboring blocks”) according to an intra prediction mode; (¶177 and Fig. 20, “Intra-prediction processing unit 166”, depicted in Fig. 20, may “determine the intra prediction mode for the PU”) selecting at least one spatial reference block in the picture (¶176-177 and Fig. 20, “determine neighboring samples for predicting a current block” arranged outside of “a region of a current picture, the region comprising the current block”) responsive to the intra prediction mode of the current block; (¶176-177 and Fig. 20, intra-prediction processing unit 166 may “generate the predictive blocks of the PU based on samples spatially-neighboring blocks”) determining parameters (¶176 and Fig. 20, “derive local illumination compensation information for the current block using the neighboring samples”) for a spatial local illumination compensation (¶176, “local illumination compensation”) determining a final prediction of the current block (¶176 and Fig. 20, prediction processing unit 152, depicted in Fig. 20, may “generate a prediction block”) by applying spatial local illumination compensation (¶176, “generate a prediction block using the local illumination compensation”) to the intra prediction of the current block (¶176, “Prediction processing unit 152 (e.g., intra-prediction processing unit 166) may generate a prediction block using the local illumination compensation information”) based on the determined parameters (¶176, “local illumination compensation information” of the current block using “neighboring samples”).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the illumination compensation of Zheng with the decoding of Zhang, which generates prediction using local illumination compensation that corresponds to intra-prediction processing after determining a block has adjacent neighboring samples. This allows for improved processing speed and reduced power consumption.

Regarding claim 5, Zheng with Zhang teaches the limitations of claim 1. Zheng teaches additionally: determining a syntax element (¶99-100 and Table 1, “data syntax” signaling “mb_ic_flag” that is specified with “syntax values” such as “mb_ic_flag equal to 1” or “mb_ic_flag equal to 0”) indicating whether the spatial local illumination compensation (¶99-100 and Table 1, “mb_ic_flag” specifies whether or not “illumination compensation is used”) applies on the current block or not (¶99-100 and Table 1, mb_ic_flag specifying whether or not “illumination compensation is used for the current macroblock”).

Regarding claim 27, Zheng teaches: A method for video encoding, (¶71-81, 89-91, and Fig. 7, “encoder for encoding picture data”) comprising: obtaining, for a current block in a picture, (¶71-81, 89-91, and Fig. 7, “input of the encoder 700” for encoding an input “image block” for a received “input picture”) information (¶81, 90, and Fig. 7, “input of the encoder controller 705” as an input of the encoder 700 for an input picture) of the current block; (¶71-81, 89-91, and Fig. 7, input of the encoder controller 705 for a received “input picture”) determining, for the current block, (¶90, received “image block” which has “illumination compensation model” applied to it) parameters (¶90 and Fig. 7, “computing illumination compensation parameters”) for a spatial local illumination compensation (¶90, 109, and Fig. 10, “illumination compensation parameters” that best “approximate the current block” by “performing displaced intra prediction”) based on spatially neighboring reconstructed samples (¶90-92 and Fig. 9, “adaptive illumination compensation for intra prediction” applying illumination compensation parameters referring to “intra prediction associated with non-predicted picture” denoted by reference number 950 depicted in Fig. 9 with a “corresponding reconstructed” template denoted by “x” adjacent to “original block” that is “denoted by Y”) of the current block (¶90-92 and Fig. 9, encoded image block corresponding to “original block” denoted by Y) and corresponding spatially neighboring reconstructed samples (¶90-92 and Fig. 9, “adaptive illumination compensation for intra prediction” applying illumination compensation parameters referring to “reconstructed picture” denoted by number 960 as depicted in Fig. 9, which includes “P1 and P2 respectively represent a first prediction and a second prediction, and Q1 and Q2 respectively represent the templates of the first and the second predictions”) of at least one spatial reference block in the picture; (¶92 and Fig. 9, intra prediction relates to the reconstructed picture denoted by number 960 as depicted in Fig. 9, which includes “P1 and P2 respectively represent a first prediction and a second prediction, and Q1 and Q2 respectively represent the templates of the first and the second predictions”, within the same “reconstructed picture”, which appear as reconstructed blocks within the intra prediction section as depicted in Fig. 9) and encoding the current block (¶90, encoder method for “an image block using adaptive illumination compensation”) using the final prediction (¶90, “the prediction block”) of the current block (¶90, encoder method where “an image block” has applied “adaptive illumination compensation parameters on the prediction block”).

It should be pointed out that Zheng teaches that both the computation of illumination compensation parameters and the application of intra prediction with illumination compensation correspond with the signals of blocks in the same reconstructed picture. In order to apply illumination compensation in Zheng ¶90 and 92, the illumination compensation parameters applied to intra prediction implicitly correspond to residual signals using references from reconstructed blocks that are in the same image as the block being predicted. The prediction references are additionally temporally or positionally displaced from the block being predicted (Zheng ¶92 and Fig. 9), which is information that is itself inherently determined before intra prediction occurs. It would be obvious that computing the illumination compensation parameters would also specify what the intra prediction uses as a reference for prediction.
This allows a technique that can compensate for fade or temporal illumination variation.

But Zheng does not explicitly teach: obtaining an intra prediction of the current block according to an intra prediction mode; selecting at least one spatial reference block in the picture responsive to the intra prediction mode of the current block; determining a final prediction of the current block by applying spatial local illumination compensation to the intra prediction of the current block based on the determined parameters.

However, Zhang teaches additionally: obtaining an intra prediction of the current block (¶177 and Fig. 20, “Intra-prediction processing unit 166 may use an intra prediction mode to generate the predictive blocks of the PU based on samples spatially-neighboring blocks”) according to an intra prediction mode; (¶177 and Fig. 20, “Intra-prediction processing unit 166”, depicted in Fig. 20, may “determine the intra prediction mode for the PU”) selecting at least one spatial reference block in the picture (¶176-177 and Fig. 20, “determine neighboring samples for predicting a current block” arranged outside of “a region of a current picture, the region comprising the current block”) responsive to the intra prediction mode of the current block; (¶176-177 and Fig. 20, intra-prediction processing unit 166 may “generate the predictive blocks of the PU based on samples spatially-neighboring blocks”) determining parameters (¶176 and Fig. 20, “derive local illumination compensation information for the current block using the neighboring samples”) for a spatial local illumination compensation (¶176, “local illumination compensation”) determining a final prediction of the current block (¶176 and Fig. 20, prediction processing unit 152, depicted in Fig. 20, may “generate a prediction block”) by applying spatial local illumination compensation (¶176, “generate a prediction block using the local illumination compensation”) to the intra prediction of the current block (¶176, “Prediction processing unit 152 (e.g., intra-prediction processing unit 166) may generate a prediction block using the local illumination compensation information”) based on the determined parameters (¶176, “local illumination compensation information” of the current block using “neighboring samples”).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the illumination compensation of Zheng with the decoding of Zhang, which generates prediction using local illumination compensation that corresponds to intra-prediction processing after determining a block has adjacent neighboring samples. This allows for improved processing speed and reduced power consumption.

Regarding claim 28, dependent on claim 27: it is the encoding claim of decoding claim 5, dependent on claim 1. Refer to the rejection of claim 5 to teach the limitations of claim 28.

Regarding claim 37: it is the apparatus claim of decoding method claim 8. Zheng teaches additionally: An apparatus for video decoding, (¶82 and Fig. 8, “decoder for decoding picture data” depicted in Fig. 8) comprising one or more processors, (¶46, “a processor” such as a “digital signal processor ("DSP")”) and at least one memory (¶46, “read-only memory ("ROM") for storing software”) and wherein the one or more processors (¶46, 48, and 82, the processor being hardware “capable of executing software” to perform the function, such as “decoder for decoding picture data”). Refer to the mapping of claim 1 to teach the limitations of claim 37.

Regarding claim 38: it is the apparatus claim of encoding method claim 27. Zheng teaches additionally: An apparatus for video encoding, (¶71 and Fig. 7, “encoder for encoding picture data” depicted in Fig. 7) comprising one or more processors, (¶46, “a processor” such as a “digital signal processor ("DSP")”) and at least one memory (¶46, “read-only memory ("ROM") for storing software”) and wherein the one or more processors (¶46, 48, and 71, the processor being hardware “capable of executing software” to perform the function, such as “encoder for encoding picture data”). Refer to the mapping of claim 27 to teach the limitations of claim 38.

Claim(s) 8, 31, 39-41, 45-47 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng; Yunfei et al. (US 20110007800 A1) in view of Zhang; Kai et al. (US 20190260996 A1) in view of PARK; Naeri et al. (US 20190200021 A1).

Regarding claim 8, Zheng with Zhang teaches the limitations of claim 1. Zhang teaches additionally: the at least one spatial reference block (¶176 and 159, “neighboring samples”) is any of an above adjacent block, (¶176 and 159, neighboring samples as a “row of samples adjacent to a top row of the current block” such as neighboring PUs “above” for intra-prediction processing) a left adjacent block, (¶176 and 159, neighboring samples as a “column of samples adjacent to a left column of the current block” such as neighboring PUs “to the left” for intra-prediction processing) an above-right adjacent block, (¶159, neighboring sample PUs may be “above and to the right” for intra-prediction processing) and an above-left adjacent block (¶159, neighboring sample PUs may be “above and to the left” for intra-prediction processing).

But Zhang does not explicitly teach the additional limitations of claim 8. However, Park teaches additionally: the at least one spatial reference block (¶121-122 and Fig. 9, “neighboring samples for deriving IC parameters” used as “neighboring reference samples”) is any of an above adjacent block, (¶121-122 and Fig. 9, “IC parameters” may be derived using the “upper neighboring reference samples”) a left adjacent block, (¶121-122 and Fig. 9, “IC parameters” may be derived using “the left” neighboring reference samples) an above-right adjacent block, (¶127 and Fig. 9, “neighboring reference samples” may be extended to include “upper right neighboring reference samples”) a bottom-left adjacent block, (¶127 and Fig. 9, “neighboring reference samples” may be extended to include “lower left neighboring reference samples”) and an above-left adjacent block (¶112, “motion vector of a neighboring block” used for deriving a motion vector of a current block can consider a neighboring block positioned on the “upper left side of the current block”).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the illumination compensation of Zheng with the decoding of Zhang with the neighboring reference samples of Park, which can determine reference samples using a neighboring reference sample from multiple directions with respect to the current block. This selection can be adapted based on the size and shape of the block and can provide improvements to prediction performance and efficiency.

Regarding claim 31, dependent on claim 29: it is the encoding claim of decoding claim 8, dependent on claim 6. Refer to the rejection of claim 8 to teach the limitations of claim 31.

Regarding claim 39, Zheng with Zhang with Park teaches the limitations of claim 8. Zhang teaches additionally: determining a syntax element indicating which adjacent block is selected as the at least one spatial reference block (¶172 and 149, Prediction processing unit 152 generates “video data based on the syntax elements extracted from the bitstream”, such as determining to apply local illumination compensation for the current block “in response to adjacent above samples of the current block, adjacent left samples of the current block, or both”).

Regarding claim 40, Zheng with Zhang with Park teaches the limitations of claim 8. Zhang teaches additionally: the intra prediction mode of the current block is a non-angular mode, (¶68, “select an intra-prediction mode to generate the prediction block”) and the at least one spatial reference block is the above adjacent block and the left adjacent block (¶68, the intra-prediction mode selected to generate the prediction block may “generally be above” and “to the left” of the current block in the same picture).

Regarding claim 41, Zheng with Zhang with Park teaches the limitations of claim 40. Zhang teaches additionally: the non-angular mode is a planar mode, a DC mode, or a MIP mode (¶68, select an intra-prediction mode from “planar mode and DC mode” to generate the prediction block).

Regarding claim 45, dependent on claim 31: it is the encoding claim of decoding claim 39, dependent on claim 8. Refer to the rejection of claim 39 to teach the limitations of claim 45.

Regarding claim 46, dependent on claim 31: it is the encoding claim of decoding claim 40, dependent on claim 8. Refer to the rejection of claim 40 to teach the limitations of claim 46.

Regarding claim 47, dependent on claim 46: it is the encoding claim of decoding claim 41, dependent on claim 40. Refer to the rejection of claim 41 to teach the limitations of claim 47.

Claim(s) 42-43, 48-49 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng; Yunfei et al. (US 20110007800 A1) in view of Zhang; Kai et al. (US 20190260996 A1) in view of PARK; Naeri et al. (US 20190200021 A1) in view of LIM; Sung Chang et al. (US 20220109846 A1).

Regarding claim 42, Zheng with Zhang with Park teaches the limitations of claim 8, but does not explicitly teach the additional limitations of claim 42. However, Lim teaches additionally: if the intra prediction mode of the current block is an angular mode with horizontal direction, (¶261, “when an intra-prediction mode of a current block is a horizontal mode”) the at least one spatial reference block is the left adjacent block, (¶261, “a prediction sample of the current block” generated by “samples of a neighboring block that is located left” of a current block) and if the intra prediction mode of the current block is an angular mode with vertical direction, (¶261, “when an intra-prediction mode of a current block is a vertical mode”) the at least one spatial reference block is the above adjacent block (¶261, “a prediction sample of the current block” generated by “samples of a neighboring block that is located above” a current block).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the illumination compensation of Zheng with the decoding of Zhang with the neighboring reference samples of Park with the intra-prediction mode of Lim, which indicates prediction samples by combining samples of neighboring blocks located left or located above. This allows for techniques that can improve efficiency.

Regarding claim 43, Zheng with Zhang with Park teaches the limitations of claim 8, but does not explicitly teach the additional limitations of claim 43. However, Lim teaches additionally: if the intra prediction mode of the current block is an angular mode with diagonal direction, (¶261, 203-205, and Fig. 4, “when an intra-prediction mode of a current block is a diagonal mode” a prediction sample of the current block “is located in the corresponding diagonal direction”) the at least one spatial reference block is the bottom-left adjacent block for 45 degrees, (¶261, 203-205, and Fig. 4, “a prediction sample of the current block” located in the corresponding diagonal direction, such as an “intra prediction mode” pointing to the bottom left as depicted in Fig. 4) the at least one spatial reference block is the above-left adjacent block for -45 degrees, (¶261, 203-205, and Fig. 4, “a prediction sample of the current block” located in the corresponding diagonal direction, such as an “intra prediction mode” pointing to the top left as depicted in Fig. 4) and the at least one spatial reference block is the above-right adjacent block for -135 degrees (¶261, 203-205, and Fig. 4, “a prediction sample of the current block” located in the corresponding diagonal direction, such as an “intra prediction mode” pointing to the top right as depicted in Fig. 4).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the illumination compensation of Zheng with the decoding of Zhang with the neighboring reference samples of Park with the intra-prediction mode of Lim, which indicates prediction samples by combining samples of neighboring blocks located left or located above. This allows for techniques that can improve efficiency.

Regarding claim 48, dependent on claim 31: it is the encoding claim of decoding claim 42, dependent on claim 8. Refer to the rejection of claim 42 to teach the limitations of claim 48.

Regarding claim 49, dependent on claim 31: it is the encoding claim of decoding claim 43, dependent on claim 8. Refer to the rejection of claim 43 to teach the limitations of claim 49.

Claim(s) 44, 50 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng; Yunfei et al. (US 20110007800 A1) in view of Zhang; Kai et al. (US 20190260996 A1) in view of PARK; Naeri et al. (US 20190200021 A1) in view of ZHAO; Liang et al. (US 20200396470 A1).

Regarding claim 44, Zheng with Zhang with Park teaches the limitations of claim 8, but does not explicitly teach the additional limitations of claim 44. However, Zhao teaches additionally: if the intra prediction mode of the current block is a wide angular mode, (¶107 and Fig. 8A-8B, modes “-14~-1” and “67~80” referred to as “wide-angle intra prediction (WAIP) modes” as presented in Fig. 8A-8B) the at least one spatial reference block is the bottom-left adjacent block for a wide angle direction beyond a bottom-left direction, (¶107 and Fig. 8A-8B, mode “-14” with “intraPredAngle” of 512 beyond mode “2” with intraPredAngle 32 as presented in Fig. 8A-8B) and the at least one spatial reference block is the above-right adjacent block for a wide angle direction beyond an above-right direction (¶107 and Fig. 8A-8B, mode “80” with “intraPredAngle” of 512 beyond mode “66” with intraPredAngle 32 as presented in Fig. 8A-8B).

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the illumination compensation of Zheng with the decoding of Zhang with the neighboring reference samples of Park with the intra prediction in VVC of Zhao, with intra prediction modes that number from -14 to 80 and intra prediction angles that range from 32 to 512. This allows for coding data that can enable properly decoding or more accurately reconstructing original video data.

Regarding claim 50, dependent on claim 31: it is the encoding claim of decoding claim 44, dependent on claim 8. Refer to the rejection of claim 44 to teach the limitations of claim 50.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIMMY S LEE whose telephone number is (571) 270-7322. The examiner can normally be reached Monday through Friday, 10AM-8PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph G. Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH G USTARIS/
Supervisory Patent Examiner, Art Unit 2483

/JIMMY S LEE/
Examiner, Art Unit 2483
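Stepping back from the Office action text: the reference-block selection that claims 40-44 recite (non-angular modes use the above and left neighbors; horizontal and vertical angular modes use the left and above neighbor respectively; diagonal modes at 45, -45, and -135 degrees use the bottom-left, above-left, and above-right neighbors) can be sketched as a simple dispatch. The mode names and function below are the editor's illustrative assumptions, not taken from any cited reference:

```python
NON_ANGULAR = {"PLANAR", "DC", "MIP"}

def select_reference_blocks(mode, angle_deg=None):
    """Map an intra prediction mode to the spatial reference block(s)
    used for LIC parameter derivation, following the claim structure
    summarized in this Office action. Hypothetical helper."""
    if mode in NON_ANGULAR:
        return ["above", "left"]
    if mode == "HORIZONTAL":
        return ["left"]
    if mode == "VERTICAL":
        return ["above"]
    if mode == "DIAGONAL":
        # Claim 43's three diagonal cases.
        return {45: ["bottom-left"],
                -45: ["above-left"],
                -135: ["above-right"]}[angle_deg]
    raise ValueError(f"unmapped mode: {mode}")

print(select_reference_blocks("DC"))            # -> ['above', 'left']
print(select_reference_blocks("DIAGONAL", 45))  # -> ['bottom-left']
```

Claim 44's wide-angle cases would extend the same dispatch with the bottom-left and above-right neighbors for directions beyond the normal angular range.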

Prosecution Timeline

Aug 08, 2023: Application Filed
Oct 05, 2024: Non-Final Rejection — §103
Jan 10, 2025: Response Filed
Mar 14, 2025: Final Rejection — §103
Jun 04, 2025: Response after Non-Final Action
Jun 18, 2025: Request for Continued Examination
Jun 24, 2025: Response after Non-Final Action
Jul 07, 2025: Non-Final Rejection — §103
Oct 10, 2025: Response Filed
Oct 28, 2025: Final Rejection — §103
Jan 26, 2026: Request for Continued Examination
Feb 10, 2026: Response after Non-Final Action
Feb 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604034: METHOD FOR PARTITIONING BLOCK AND DECODING DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596190: MILLIMETER WAVE DISPLAY ARRANGEMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12581086: MERGE WITH MVD BASED ON GEOMETRY PARTITION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12563112: SPATIALLY UNEQUAL STREAMING (granted Feb 24, 2026; 2y 5m to grant)
Patent 12554017: EBS/TOF/RGB CAMERA FOR SMART SURVEILLANCE AND INTRUDER DETECTION (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 56%
With Interview (+28.1%): 84%
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 302 resolved cases by this examiner. Grant probability derived from career allow rate.
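The projection figures follow from the examiner statistics by simple arithmetic; a minimal sketch, assuming rounding to the nearest percent:

```python
# Figures shown on this dashboard.
granted, resolved = 170, 302
interview_lift = 0.281                      # +28.1 points when an interview is held

allow_rate = granted / resolved             # career allow rate, ~0.563
grant_probability = round(allow_rate * 100)                   # 56 (%)
with_interview = round((allow_rate + interview_lift) * 100)   # 84 (%)

print(grant_probability, with_interview)  # -> 56 84
```

This matches the displayed 56% baseline and 84% with-interview figures, consistent with the note that grant probability is derived directly from the career allow rate.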
