Prosecution Insights
Last updated: April 19, 2026
Application No. 18/448,845

ATTENTION-BASED REFINEMENT FOR DEPTH COMPLETION

Status: Non-Final OA (§103)
Filed: Aug 11, 2023
Examiner: PATEL, JAYESH A
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 3 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 83% (above average; 739 granted / 887 resolved; +21.3% vs TC avg)
Interview Lift: +5.2% (moderate), based on resolved cases with vs. without an interview
Typical Timeline: 3y 0m average prosecution; 33 applications currently pending
Career History: 920 total applications across all art units
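The allow-rate figures above follow from the examiner's resolved-case counts. The sketch below is a hypothetical reconstruction of that arithmetic in Python: the granted and resolved counts come from this page, while the Tech Center baseline is back-calculated from the displayed +21.3% delta rather than reported directly.

```python
# Hypothetical reconstruction of the "Examiner Intelligence" figures shown above.
# Only the granted/resolved counts and the +21.3% delta appear on the page; the
# Tech Center baseline below is inferred from them, not an official statistic.
granted = 739
resolved = 887

career_allow_rate = granted / resolved            # ~0.833, displayed as 83%
tc_average_estimate = career_allow_rate - 0.213   # implied baseline, ~62%

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Delta vs TC average: {career_allow_rate - tc_average_estimate:+.1%}")
```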

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 25.0% (-15.0% vs TC avg)
Baseline is the Tech Center average estimate • Based on career data from 887 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/16/2026 has been entered.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-28 are rejected under 35 U.S.C. 103 as being unpatentable over NPL1 (SDformer: Efficient End-to-End Transformer for Depth Completion, Jian Qian et al., IEEE, 2022, Pages 56-61), hereafter NPL1, in view of Xiong et al. (US12333730), hereafter Xiong, and further in view of Singh et al. (US20240127596), hereafter Singh.

1. Regarding claim 1, NPL1 discloses a processor-implemented method (fig 1, pages 56-57 disclose "In practice, the SDformer obtains state-of-the-art results against the CNN-based depth completion models with lower computing loads (i.e., a processor-implemented method performed by at least one processor/computer; page 59 section B discloses NVIDIA RTX 3090 GPUs with 24GB GPU memory) and parameters on the NYU Depth V2 and KITTI DC datasets") performed by at least one processor, the processor-implemented method comprising:

receiving, by an artificial neural network (ANN), an input comprising an image and a sparse depth measurement (fig 1, pages 56-57, page 57 col 1: "In particular, the input module employs a convolution layer to extract the features from depth maps and RGB images, then concatenates these features to the two subsequent modules.
The U-shaped encoder-decoder Transformer is mainly composed of a series of SDformer blocks, each of which utilizes the Different Window-based Multi-Scale Self-Attention (DWSA) and the Gated Feed-Forward Network (GFFN) for extracting local and global information of depth features."; Section II "Proposed method" shows and discloses the CNN-based (i.e., artificial neural network) SDformer consisting of an input comprising an RGB image and a sparse depth measurement, meeting the claim limitations);

extracting, by the ANN, visual features of the input (fig 1, page 57 col 1 discloses "In particular, the input module employs a convolution layer to extract the features from depth maps and RGB images (visual features), then concatenates these features to the two subsequent modules. The U-shaped encoder-decoder Transformer is mainly composed of a series of SDformer blocks, each of which utilizes the Different Window-based Multi-Scale Self-Attention (DWSA) and the Gated Feed-Forward Network (GFFN) for extracting local and global information of depth features." and "we first apply a 3×3 convolutional layer with LeakyReLU [16] to extract low-level features of depth Ps ∈ R^(C1×H×W) and RGB image Pc ∈ R^(C2×H×W)" (to produce multi-scale visual features), meeting the claim limitations);

applying, by the ANN, a self-attention mechanism to (fig 1, page 57 cols 1-2 discloses "We first apply a 3×3 convolutional layer with LeakyReLU [16] to extract low-level features of depth Ps ∈ R^(C1×H×W) and RGB image Pc ∈ R^(C2×H×W). Then, we concatenate them in the channel dimensions P0 ∈ R^(C×H×W). Next, the concatenated features P0 pass through the U-shaped encoder-decoder SDformer blocks to get the deep features Pd ∈ R^(2C×H×W). Each stage of the encoding decoding contains several transformer blocks, which utilize the self-attention mechanism to capture long-range dependencies and reduce the computational cost of the feature maps with the different windows." meeting the claim limitations),

generating, by the ANN, a dense depth map based on the set of attended multi-scale visual features (fig 1, page 57 cols 1-2 discloses "Each stage of the encoding decoding contains several transformer blocks, which utilize the self-attention mechanism to capture long-range dependencies and reduce the computational cost of the feature maps with the different windows. During each step of encoding, we employ shuffling and unshuffling procedures to down-sample and up-sample the features. After that, the deep features Pd ∈ R^(2C×H×W) concatenate with the shallow features P0 ∈ R^(C×H×W) and go through the refinement stage to get the enriched feature maps Pr ∈ R^(3C×H×W). Finally, a 3 × 3 convolution layer is applied to the refinement features Pr ∈ R^(3C×H×W) to generate a final depth prediction P ∈ R^(1×H×W)" meeting the claim limitations).

As seen above, NPL1 discloses input visual features and producing multi-scale visual features. NPL1 is silent and fails to disclose the input image at multiple different scales, applying self-attention to a subset of features, and the subset comprising fewer than all of the multi-scale visual features.

Xiong discloses the input image at multiple different scales within the input (fig 3 element 330 and col 9 lines 4-14 disclose the input image at multiple different scales within the input, meeting the above claim limitations). Before the effective filing date of the invention was made, NPL1 and Xiong are combinable because they are from the same field of endeavor and are analogous art of image processing.
The suggestion/motivation would be a faster and cost-effective method/system at col 9 lines 44-51.

As seen above, NPL1 discloses the self-attention mechanism applied to the multi-scale features. NPL1 and Xiong, however, are silent and fail to disclose self-attention to a subset of features and the subset comprising fewer than all of the multi-scale visual features. Singh discloses using self-attention to a subset of features and the subset comprising fewer than all of the multi-scale visual features (para 0103 discloses self-attention to a subset of features and the subset comprising fewer than all of the multi-scale visual features). Before the effective filing date of the invention was made, Singh, Xiong and NPL1 are combinable because they are from the same field of endeavor and are analogous art of image processing. The suggestion/motivation would be a faster and efficient system/method at para 0103. Therefore, it would have been obvious to one of ordinary skill in the art to have recognized the advantages of Singh and Xiong in the method of NPL1 to obtain the invention as specified in claim 1.

2. Regarding claim 2, NPL1, Xiong and Singh disclose the processor-implemented method of claim 1. NPL1 further discloses in which the sparse depth measurement comprises a light detection and ranging (LiDAR) measurement (pages 58-59 disclose "The sparse depth maps were generated from HDL-64 LIDAR", meeting the claim limitations; examiner notes that due to the recital of "or," only one alternative is required to be met).

3. Regarding claim 3, NPL1, Xiong and Singh disclose the processor-implemented method of claim 1. NPL1 further discloses wherein the processor-implemented method is performed by at least one processor (fig 1, pages 56-57 disclose "In practice, the SDformer obtains state-of-the-art results against the CNN-based depth completion models with lower computing loads (i.e., a processor-implemented method performed by at least one processor/computer; page 59 section B discloses NVIDIA RTX 3090 GPUs with 24GB GPU memory) and parameters on the NYU Depth V2 and KITTI DC datasets") (Xiong fig 1 element 100 (mobile device) with 140, col 4 lines 42-59 shows and discloses the method performed by at least one processor 140 of the mobile device (i.e., the smartphone), meeting the above claim limitations). NPL1 and Xiong combined would therefore meet the limitations of claim 3.

4. Regarding claim 4, NPL1, Xiong and Singh disclose the processor-implemented method of claim 1. NPL1 discloses on page 56 the method used in various computer vision applications such as robot navigation (i.e., a robotics application as claimed in claim 4), augmented reality, and motion planning. Xiong further discloses implementing the dense depth map in an extended reality (XR) application (fig 1 element 162, col 4 lines 60-64, col 7 line 63 through col 8 line 9 shows and discloses implementing the dense depth map in an extended reality (XR) application; examiner notes that due to the recital of "or," only one alternative is required to be met).

5. Regarding claim 5, NPL1, Xiong and Singh disclose the processor-implemented method of claim 1.
NPL1 further discloses comprising processing, by the ANN, the multi-scale visual features by applying a depth-separable convolution to the multi-scale visual features (fig 1 and page 57 disclose "The U-shaped encoder-decoder Transformer is mainly composed of a series of SDformer blocks, each of which utilizes the Different Window-based Multi-Scale Self-Attention (DWSA) and the Gated Feed-Forward Network (GFFN) for extracting local and global information of depth features. Finally, we refine the predicted features from the input module and the U-shaped encoder-decoder SDformer blocks to get the enriching depth features and apply a convolution layer to obtain the dense depth map." meeting the above claim limitations), and Singh discloses the subset of the features (para 0103). NPL1, Xiong and Singh together would therefore meet the limitations of claim 5.

6. Regarding claim 6, NPL1, Xiong and Singh disclose the processor-implemented method of claim 1. NPL1 further discloses in which the ANN comprises a sparse-to-dense (S2D) network (fig 1, page 56 col 1 discloses "In this work, we propose a different window-based Transformer architecture for depth completion tasks named Sparse-to-Dense Transformer (SDformer)" and page 57 shows a sparse-to-dense CNN network, meeting the claim limitations).

7. Regarding claim 7, NPL1, Xiong and Singh disclose the processor-implemented method of claim 1. NPL1 further discloses in which the ANN comprises a convolutional neural network (CNN) (pages 56-57 and fig 1 show and disclose a CNN-based method (i.e., the artificial neural network ANN comprises a convolutional neural network), meeting the claim limitations).

8. Regarding claim 8, NPL1, Xiong and Singh disclose the processor-implemented method of claim 1. NPL1 further discloses in which the image is captured by a single camera (page 58 section A discloses the dataset consists of RGB and depth images captured by a Microsoft Kinect Camera (i.e., a single camera), meeting the claim limitations).

9. Regarding claim 9, NPL1, Xiong and Singh disclose the processor-implemented method of claim 8. NPL1 also discloses on page 56 the method used in various computer vision applications such as robot navigation, augmented reality, and motion planning. NPL1 also discloses a single camera (page 58 section A discloses the dataset consists of RGB and depth images captured by a Microsoft Kinect Camera (i.e., a single camera)). NPL1 however fails to disclose wherein the processor-implemented method is performed by at least one processor of a mobile device, wherein the single camera is included in the mobile device. Xiong discloses wherein the processor-implemented method is performed by at least one processor of a mobile device, wherein the single camera is included in the mobile device (fig 1 element 100 (mobile device) with 140 (processor), col 4 lines 42-59, col 6 line 33 shows and discloses the method performed by at least one processor 140 of the mobile device (i.e., the smartphone), the mobile device having one or more cameras 186 (i.e., a single camera) included in the mobile device 100, meeting the above claim limitations). NPL1 and Xiong combined would therefore meet the limitations of claim 9.

10. Claim 10 is a corresponding apparatus claim of claim 1. See the explanation of claim 1.
NPL1 discloses and shows an apparatus, comprising: at least one memory; and at least one processor coupled to the at least one memory, the at least one processor configured to perform the recited steps (Figs 1-3 and page 59 section B: "Our method was implemented with the Pytorch library and was trained on two NVIDIA RTX 3090 GPUs (at least one processor configured to perform the steps/functions) with 24GB GPU memory" meeting the claim limitations).

11. Claim 11 is a corresponding apparatus claim of claim 2. See the corresponding explanation of claim 2.

12. Claim 12 is a corresponding apparatus claim of claim 3. See the corresponding explanation of claim 3.

13. Claim 13 is a corresponding apparatus claim of claim 4. See the corresponding explanation of claim 4.

14. Claim 14 is a corresponding apparatus claim of claim 5. See the corresponding explanation of claim 5.

15. Claim 15 is a corresponding apparatus claim of claim 6. See the corresponding explanation of claim 6.

16. Claim 16 is a corresponding apparatus claim of claim 7. See the corresponding explanation of claim 7.

17. Claim 17 is a corresponding apparatus claim of claim 8. See the corresponding explanation of claim 8.

18. Claim 18 is a corresponding apparatus claim of claim 9. See the corresponding explanation of claim 9.

19. Regarding claim 19, NPL1 discloses a non-transitory computer-readable medium having program code recorded thereon, the program code executed by at least one processor and comprising: ("A non-transitory computer-readable medium having program code recorded thereon, the program code executed by at least one processor to perform the steps" would be obvious in view of fig 1, pages 56-57, which disclose "In practice, the SDformer obtains state-of-the-art results against the CNN-based depth completion models with lower computing loads (i.e., a computer with a processor and a memory storing the instructions); page 59 section B discloses NVIDIA RTX 3090 GPUs with 24GB GPU memory, i.e., "a non-transitory computer-readable medium having program code recorded thereon, the program code executed by at least one processor and comprising: program code to perform the steps/functions");

program code to receive, by an artificial neural network (ANN), an input comprising an image and a sparse depth measurement (fig 1, pages 56-57, page 57 col 1: "In particular, the input module employs a convolution layer to extract the features from depth maps and RGB images, then concatenates these features to the two subsequent modules. The U-shaped encoder-decoder Transformer is mainly composed of a series of SDformer blocks, each of which utilizes the Different Window-based Multi-Scale Self-Attention (DWSA) and the Gated Feed-Forward Network (GFFN) for extracting local and global information of depth features."; Section II "Proposed method" shows and discloses the CNN-based (i.e., artificial neural network) SDformer consisting of an input comprising an RGB image and a sparse depth measurement, meeting the claim limitations);

program code to extract, by the ANN, visual features of the input (fig 1, page 57 col 1 discloses "In particular, the input module employs a convolution layer to extract the features from depth maps and RGB images (visual features), then concatenates these features to the two subsequent modules.
The U-shaped encoder-decoder Transformer is mainly composed of a series of SDformer blocks, each of which utilizes the Different Window-based Multi-Scale Self-Attention (DWSA) and the Gated Feed-Forward Network (GFFN) for extracting local and global information of depth features." and "we first apply a 3×3 convolutional layer with LeakyReLU [16] to extract low-level features of depth Ps ∈ R^(C1×H×W) and RGB image Pc ∈ R^(C2×H×W)" (to produce multi-scale visual features), meeting the claim limitations);

program code to apply, by the ANN, a self-attention mechanism to (fig 1, page 57 cols 1-2 discloses "We first apply a 3×3 convolutional layer with LeakyReLU [16] to extract low-level features of depth Ps ∈ R^(C1×H×W) and RGB image Pc ∈ R^(C2×H×W). Then, we concatenate them in the channel dimensions P0 ∈ R^(C×H×W). Next, the concatenated features P0 pass through the U-shaped encoder-decoder SDformer blocks to get the deep features Pd ∈ R^(2C×H×W). Each stage of the encoding decoding contains several transformer blocks, which utilize the self-attention mechanism to capture long-range dependencies and reduce the computational cost of the feature maps with the different windows." meeting the claim limitations),

program code to generate, by the ANN, a dense depth map based on the set of attended multi-scale visual features (fig 1, page 57 cols 1-2 discloses "Each stage of the encoding decoding contains several transformer blocks, which utilize the self-attention mechanism to capture long-range dependencies and reduce the computational cost of the feature maps with the different windows. During each step of encoding, we employ shuffling and unshuffling procedures to down-sample and up-sample the features. After that, the deep features Pd ∈ R^(2C×H×W) concatenate with the shallow features P0 ∈ R^(C×H×W) and go through the refinement stage to get the enriched feature maps Pr ∈ R^(3C×H×W). Finally, a 3 × 3 convolution layer is applied to the refinement features Pr ∈ R^(3C×H×W) to generate a final depth prediction P ∈ R^(1×H×W)" meeting the claim limitations).

As seen above, NPL1 discloses input visual features and producing multi-scale visual features. NPL1 is silent and fails to disclose the input image at multiple different scales, applying self-attention to a subset of features, and the subset comprising fewer than all of the multi-scale visual features.

Xiong discloses the input image at multiple different scales within the input (fig 3 element 330 and col 9 lines 4-14 disclose the input image at multiple different scales within the input, meeting the above claim limitations). Before the effective filing date of the invention was made, NPL1 and Xiong are combinable because they are from the same field of endeavor and are analogous art of image processing. The suggestion/motivation would be a faster and cost-effective method/system at col 9 lines 44-51.

As seen above, NPL1 discloses the self-attention mechanism applied to the multi-scale features. NPL1 and Xiong, however, are silent and fail to disclose self-attention to a subset of features and the subset comprising fewer than all of the multi-scale visual features. Singh discloses using self-attention to a subset of features and the subset comprising fewer than all of the multi-scale visual features (para 0103 discloses self-attention to a subset of features and the subset comprising fewer than all of the multi-scale visual features).
Before the effective filing date of the invention was made, Singh, Xiong and NPL1 are combinable because they are from the same field of endeavor and are analogous art of image processing. The suggestion/motivation would be a faster and efficient system/method at para 0103. Therefore, it would have been obvious to one of ordinary skill in the art to have recognized the advantages of Singh and Xiong in the method of NPL1 to obtain the invention as specified in claim 19.

20. Claim 20 is a corresponding non-transitory computer readable medium claim of claim 2. See the corresponding explanation of claim 2.

21. Claim 21 is a corresponding non-transitory computer readable medium claim of claim 5. See the corresponding explanation of claim 5.

22. Claim 22 is a corresponding non-transitory computer readable medium claim of claim 6. See the corresponding explanation of claim 6.

23. Claim 23 is a corresponding non-transitory computer readable medium claim of claim 8. See the corresponding explanation of claim 8.

24. Claim 24 is a corresponding apparatus claim of claim 1. See the corresponding explanation of claim 1. NPL1 discloses and shows the means for (structure/architecture) in figs 1-3 and pages 56-58.

25. Claim 25 is a corresponding apparatus claim of claim 2. See the explanation of claim 2.

26. Claim 26 is a corresponding apparatus claim of claim 5. See the explanation of claim 5.

27. Claim 27 is a corresponding apparatus claim of claim 6. See the explanation of claim 6.

28. Claim 28 is a corresponding apparatus claim of claim 8. See the explanation of claim 8.

Examiner's Note: Examiner has cited figures and paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing the responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. Examiner has also cited references in the PTO-892 that are not relied on but are relevant and pertinent to the applicant's disclosure, and may also read on (anticipate or render obvious) the claims and claimed limitations. Applicant is advised to consider the references in preparing the response/amendments in order to expedite the prosecution.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAYESH PATEL, whose telephone number is (571) 270-1227. The examiner can normally be reached Mon-Fri. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at 571-270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAYESH A PATEL/
Primary Examiner, Art Unit 2677
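For readers mapping the rejection back to the claims, the dispute in claim 1 centers on applying self-attention to a subset comprising fewer than all of the multi-scale visual features. The toy sketch below is a generic PyTorch illustration of that distinction only; it is not the claimed invention and not the SDformer, Xiong, or Singh implementations, and every shape and scale choice in it is hypothetical.

```python
# Toy illustration: self-attention applied to a *subset* of multi-scale features
# (fewer than all scales). Generic sketch, not any party's actual architecture.
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

# Features hypothetically extracted from the RGB image + sparse depth at three scales.
multi_scale_feats = [torch.randn(1, (32 // s) ** 2, 64) for s in (1, 2, 4)]

# Attend over only the two coarser scales: a subset comprising fewer than all scales.
subset = torch.cat(multi_scale_feats[1:], dim=1)
attended, _ = attn(subset, subset, subset)

# The finest-scale features bypass attention here and could rejoin later, e.g. in a
# convolutional refinement stage that predicts the dense depth map.
print(attended.shape, multi_scale_feats[0].shape)
```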

Prosecution Timeline

Aug 11, 2023: Application Filed
Aug 12, 2025: Non-Final Rejection — §103
Oct 28, 2025: Response Filed
Nov 14, 2025: Final Rejection — §103
Jan 12, 2026: Examiner Interview Summary
Jan 12, 2026: Applicant Interview (Telephonic)
Jan 13, 2026: Response after Non-Final Action
Jan 16, 2026: Request for Continued Examination
Jan 26, 2026: Response after Non-Final Action
Jan 28, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597170
METHOD AND APPARATUS FOR IMMERSIVE VIDEO ENCODING AND DECODING, AND METHOD FOR TRANSMITTING A BITSTREAM GENERATED BY THE IMMERSIVE VIDEO ENCODING METHOD
2y 5m to grant • Granted Apr 07, 2026
Patent 12579770
DETECTION SYSTEM, DETECTION METHOD, AND NON-TRANSITORY STORAGE MEDIUM
2y 5m to grant • Granted Mar 17, 2026
Patent 12561949
CONDITIONAL PROCEDURAL MODEL GENERATION
2y 5m to grant • Granted Feb 24, 2026
Patent 12555346
Automatic Working System, Automatic Walking Device and Control Method Therefor, and Computer-Readable Storage Medium
2y 5m to grant • Granted Feb 17, 2026
Patent 12536636
METHOD AND SYSTEM FOR EVALUATING QUALITY OF A DOCUMENT
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 88% (+5.2%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 887 resolved cases by this examiner. Grant probability derived from career allow rate.
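The note above says the grant probability is derived from the career allow rate. A minimal sketch of the projection arithmetic follows, assuming the with-interview figure is simply the displayed probability plus the interview lift; the page does not state its exact formula, so this is an illustration only.

```python
# Hypothetical reconstruction of the "Prosecution Projections" figures above. The
# additive interview adjustment is an assumption, not a documented formula.
base_grant_probability = 0.83   # displayed grant probability (career allow rate)
interview_lift = 0.052          # displayed interview lift

with_interview = base_grant_probability + interview_lift
print(f"With interview: {with_interview:.0%}")   # ~88%, matching the displayed figure
```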
