Prosecution Insights
Last updated: April 19, 2026
Application No. 18/242,090

Techniques For Channel State Information (CSI) Compression

Final Rejection — §103
Filed: Sep 05, 2023
Examiner: LEMA LEMOS, LUIS GUILLERMO
Art Unit: 2419
Tech Center: 2400 — Computer Networks
Assignee: MediaTek Inc.
OA Round: 2 (Final)
Grant Probability: Favorable
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 1m

Examiner Intelligence

Career Allow Rate: 0% (grants 0% of cases; 0 granted / 0 resolved; -58.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution (typical timeline): 3y 1m
Currently Pending: 36
Career History: 36 total applications across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 17.4% (-22.6% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This office action is in response to communications filed on 01/08/2026. Claims 1-6 and 8-19 are pending and rejected. Claims 7 and 20 are cancelled.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 8-13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (US 20240313838 A1) (hereinafter "Wu") in view of Yoo et al. (US 20210266763 A1) (hereinafter "Yoo").

Regarding claim 1, Wu discloses a method comprising: acquiring, by a processor of a user equipment (UE) that is in wireless communication with a base station node, channel state information (CSI) at least associated with the wireless communication (see Fig. 10, para. [0005], disclosing an apparatus for wireless communication at a UE, including a processor and memory, configured to receive a configuration associated with a first channel state information scheme); and compressing, by the processor (see Fig. 10, para. [0005]), the CSI (see Fig. 3, para. [0109], disclosing joint training of compression schemes supporting CSI and channel compression switching, which may be implemented in a machine-learning operation at a base station or at a UE) into CSI feedback for the base station node via an artificial intelligence (AI) or machine-learning (ML)-based encoder (see Fig. 3, para. [0109]-[0110], disclosing joint training implemented in machine learning in the UE; the autoencoder may involve one or more neural networks in the UE and may provide an output referred to as a feedback vector). Wu fails to disclose, but Yoo discloses, an encoder that implements at least one of convolutional projection (see Fig. 4D, Fig. 5, para. [0060]-[0061], [0071], [0078]-[0080], disclosing a convolutional neural network, each layer of which may be considered a basis for projection, and AI/ML algorithms for wireless communications), expandable kernels (see para. [0062], disclosing that a deep convolutional network layer may apply convolutional kernels, e.g., a 5x5 kernel that generates 28x28 maps), and multi-head re-attention (MHRA) (this part is optional), wherein the MHRA defines new attention based on a linear combination of an attention score for query-key pairs to generate new attention maps with features for use by the AI or ML-based encoder that processes the CSI (this part is optional). Wu and Yoo are considered analogous to the claimed invention because both are in the field of wireless communication methods and CSI compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu to include the convolutional projection as described by Yoo. The motivation to combine the references would be to improve the CSI feedback operation.

Regarding claim 2, Wu discloses a method wherein the CSI acquired by the UE is raw CSI (see para. [0038], disclosing a base station transmitting a reference signal (CSI-RS) that may be monitored or received by a UE, which may perform calculations based on measured or predicted characteristics of the reference signal to support various estimation techniques), further comprising, prior to compressing the CSI into the CSI feedback, pre-processing, by the processor, the CSI into pre-processed CSI using a pre-processing function of the UE (see Fig. 6, para. [0131], [0134], disclosing a machine-learning process in which pre-processing is performed according to a sequence of operations on the input values, e.g., into a format compatible with the machine-learning algorithm).

Regarding claim 8, Wu discloses a method wherein the AI or ML-based encoder includes at least one of a convolutional transformer (CVT) block (this part is optional), a convolutional transformer with re-attention (CVT-RA) block (this part is optional), or expandable kernels to process the CSI (see para. [0039], [0096], [0133]-[0134], disclosing machine-learning techniques for channel compression, including convolutional neural networks, and a machine-learning algorithm with input layers).

Regarding claim 9, Wu discloses a method wherein the AI or ML-based encoder includes expandable kernels and at least one of a convolutional neural network (CNN) (see para. [0039], [0096], [0133]-[0134]), a deep neural network (DNN) (this part is optional), or a transformer to process the CSI (this part is optional).

Regarding claim 10, Wu discloses a method comprising: receiving, at a base station node, channel state information (CSI) feedback from a user equipment (UE) (see para. [0084], disclosing that the UE may report feedback indicating precoding, and that the feedback may correspond to a number of configured beams), the CSI feedback being generated from CSI acquired by the UE via an artificial intelligence (AI) (this part is optional) or machine-learning (ML)-based encoder of the UE that implements at least one of convolutional projection (see para. [0096], [0107]-[0110], disclosing CSI compression schemes that may include an encoder, such as a CSI report training a decoder; such techniques may include one or more neural networks, including convolutional neural networks, implemented in one or both of a transmission device (e.g., a UE)), expandable kernels (this part is optional), and multi-head re-attention (MHRA) (this part is optional) to compress the CSI into the CSI feedback (see Fig. 2, Fig. 3, para. [0039], [0089], disclosing that a CSI report may be compressed or decompressed, and that the wireless communication system may support CSI report compression), wherein the MHRA defines new attention based on a linear combination of an attention score for query-key pairs to generate new attention maps with features for use by the AI or ML-based encoder that processes the CSI (this part is optional); and generating, by a processor of the base station node, reconstructed CSI by at least decompressing the CSI feedback via an AI or ML-based decoder of the base station node (see para. [0039], [0090], [0095], [0096], disclosing that a CSI report or related channel information may be compressed or decompressed, and that machine learning, including one or more neural networks implemented at the UE or the base station, may be used to support CSI compression schemes).

Regarding claim 11, Wu discloses a method further comprising performing, by a processor of the base station node, one or more tasks based on the reconstructed CSI (see Fig. 10, para. [0033], [0151], disclosing devices supporting the CSI techniques, and a processor).

Regarding claim 12, Wu discloses a method wherein the one or more tasks include scheduling beamforming for one or more antennas of the base station node (see para. [0079]-[0084], disclosing a base station equipped with multiple antennas employing MIMO or beamforming, the use of multiple antennas to conduct beamforming operations, and that some signals may be transmitted multiple times in different directions).

Regarding claim 13, Wu discloses a method wherein the base station node is a gNodeB of a wireless carrier network (see para. [0048], [0050], disclosing next-generation gNodeBs (gNBs) and a UE able to communicate with the gNBs).

Regarding claim 15, Wu discloses an apparatus implementable in a user equipment (UE) that is in wireless communication with a base station node (see Fig. 10, para. [0005]), comprising: a transceiver configured to communicate wirelessly (see Fig. 8, para. [0146], [0148], disclosing a device that supports CSI and channel compression techniques, with a transceiver module in the UE); and a processor coupled to the transceiver and configured to perform operations comprising: acquiring channel state information (CSI) at least associated with the wireless communication (see para. [0149], disclosing a communication manager or various components configured to perform operations such as receiving, monitoring, and transmitting); and compressing the CSI into CSI feedback for the base station node via an artificial intelligence (AI) (this part is optional) or machine-learning (ML)-based encoder (see Fig. 3, para. [0109]-[0110], disclosing joint training implemented in machine learning in the UE; the autoencoder may involve one or more neural networks in the UE and may provide an output referred to as a feedback vector). Wu fails to disclose, but Yoo discloses, an encoder that implements at least one of convolutional projection (see Fig. 4D, Fig. 5, para. [0060]-[0061], [0071], [0078]-[0080]), expandable kernels (see para. [0062]), and multi-head re-attention (MHRA) (this part is optional), wherein the MHRA defines new attention based on a linear combination of an attention score for query-key pairs to generate new attention maps with features for use by the AI or ML-based encoder that processes the CSI (this part is optional). Wu and Yoo are considered analogous to the claimed invention because both are in the field of wireless communication methods and CSI compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu to include the convolutional projection as described by Yoo. The motivation to combine the references would be to improve the CSI feedback operation.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Wu, as applied to claim 10 above, in view of Chavva et al. (US 20210351885 A1) (hereinafter "Chavva").

Regarding claim 14, Wu discloses the method of claim 10 (see Fig. 10, para. [0005]).
Wu fails to disclose, but Chavva teaches, wherein the AI or ML-based decoder includes at least one of a convolutional transformer (CVT) block (see para. [0047]-[0048], disclosing a neural network comprising fully connected layers and convolutional layers) or a convolutional transformer with re-attention (CVT-RA) block with an MHRA function to process the CSI feedback (this part is optional). Wu and Chavva are considered analogous to the claimed invention because both are in the field of wireless communication methods and CSI compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu to include the convolutional projection as described by Chavva. The motivation to combine the references would be to improve the CSI feedback operation.

Claims 3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Yoo, as applied to claims 1 and 15 above, and further in view of Vahdat et al. (US 20210144779 A) (hereinafter "Vahdat") and Chavva et al. (US 20210351885 A1) (hereinafter "Chavva").

Regarding claim 3, Wu discloses a method (see Fig. 10, para. [0005]). Wu fails to disclose a method wherein implementing the convolutional projection includes applying a square-shaped kernel that moves around a layer of CSI elements to capture correlations between the CSI elements for each of the Key, Query, and Value parameters. However, Vahdat teaches this limitation (see Fig. 5, para. [0061], disclosing that the multi-head attention (MHA) receives the vector of real values in relation to each UE; each attention head uses abstractions called Key, Query, and Value that can create attention scores, and the attention scores are used to create a latent representation that encodes the context information, i.e., all UE CSIs). Vahdat fails to disclose applying a flattening function to flatten the correlations in the CSI elements, as captured for each of the Key, Query, and Value parameters, into a corresponding word for each of those parameters. However, Chavva teaches applying a flattening function to flatten the correlations (see Fig. 11, para. [0171], disclosing a neural-network classifier that includes a flatten layer that can convert a 4x3 input into a 12x1 output). Wu, Vahdat, and Chavva are considered analogous to the claimed invention because they are in the field of wireless communication methods and CSI compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu to include the convolutional projection as described by Vahdat and Chavva. The motivation to combine the references would be to improve the CSI feedback operation.

Regarding claim 16, Wu discloses an apparatus (see Fig. 10, para. [0005]). Wu fails to disclose an apparatus wherein implementing the convolutional projection includes: applying a square-shaped kernel that moves around a layer of CSI elements to capture correlations between the CSI elements for each of the Key, Query, and Value parameters; and applying a flattening function to flatten the correlations in the CSI elements, as captured for each of the Key, Query, and Value parameters, into a corresponding word for each of those parameters.
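The projection-and-flatten operation recited in claims 3 and 16 — a square kernel sliding over a layer of CSI elements to capture local correlations, with each resulting map flattened into a "word" for the Key, Query, and Value parameters — can be sketched as below. This is a minimal NumPy illustration, not taken from any of the cited references: the 4x3 layer size mirrors Chavva's flatten example, while the kernel values and function names are hypothetical.

```python
import numpy as np

def conv_projection(csi, kernel):
    """Slide a square kernel over a 2-D layer of CSI elements
    ("same" zero padding, stride 1) to capture local correlations."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(csi, pad)
    h, w = csi.shape
    out = np.empty_like(csi)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

rng = np.random.default_rng(0)
csi = rng.standard_normal((4, 3))  # toy 4x3 layer of CSI elements

# One (hypothetical) learned square kernel per projection: Key, Query, Value.
kernels = {name: rng.standard_normal((3, 3)) for name in ("key", "query", "value")}

# Capture correlations, then flatten each map into a token-like "word",
# e.g. a 4x3 correlation map becomes a 12-element vector (cf. 4x3 -> 12x1).
words = {name: conv_projection(csi, kern).reshape(-1)
         for name, kern in kernels.items()}
assert words["key"].shape == (12,)
```

With a 3x3 kernel and "same" padding, each projection yields a map the size of the CSI layer; flattening a 4x3 map gives the 12x1 output noted in the cited paragraph of Chavva.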
However, Vahdat teaches applying a square-shaped kernel that moves around a layer of CSI elements to capture correlations between the CSI elements for each of the Key, Query, and Value parameters (see Fig. 5, para. [0061], disclosing that the MHA receives the vector of real values in relation to each UE; each attention head uses abstractions called Key, Query, and Value that can create attention scores, which are used to create a latent representation encoding the context information, i.e., all UE CSIs). Vahdat fails to disclose the flattening function, but Chavva teaches applying a flattening function to flatten the correlations (see Fig. 11, para. [0171], disclosing a neural-network classifier that includes a flatten layer that can convert a 4x3 input into a 12x1 output). Wu, Vahdat, and Chavva are considered analogous to the claimed invention because they are in the field of wireless communication methods and CSI compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu to include the convolutional projection as described by Vahdat and Chavva. The motivation to combine the references would be to improve the CSI feedback operation.

Claims 4, 5, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Yoo, as applied to claims 1 and 15 above, and further in view of Chen et al. (US 20250055531 A1) (hereinafter "Chen").

Regarding claim 4, Wu discloses a method (see Fig. 10, para. [0005]). Wu fails to disclose, but Chen teaches, a method further comprising, prior to compressing the CSI into the CSI feedback (see para. [0004], [0008], disclosing a CSI compression feedback method), translating, by the processor, the CSI that is in an antenna-frequency domain to a beam-delay domain to reduce an entropy of the CSI (see para. [0057], disclosing reduction of feedback overhead by transforming CSI information from the frequency domain to the angle-delay domain). Wu and Chen are considered analogous to the claimed invention because both are in the field of wireless communication methods and CSI compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu to include the beam-delay domain as described by Chen. The motivation to combine the references would be to reduce the overhead of the CSI feedback operation.

Regarding claim 5, Wu discloses a method (see Fig. 10, para. [0005]). Wu fails to disclose, but Chen discloses, a method wherein implementing the expandable kernels includes adjusting sizes of kernels as kernel striding occurs over an input layer of CSI elements in the beam-delay domain, based on magnitudes of delays indicated in the beam-delay domain (see para. [0161]-[0162], disclosing a restoration convolutional neural network including seven convolutional layers with increasing kernel sizes). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu to include the expandable kernels as described by Chen.
The motivation to combine the references would be to improve the CSI feedback operation.

Regarding claim 17, Wu discloses an apparatus (see Fig. 10, para. [0005]). Wu fails to disclose, but Chen teaches, an apparatus wherein the operations further comprise, prior to compressing the CSI into the CSI feedback (see para. [0004], [0008]), translating the CSI that is in an antenna-frequency domain to a beam-delay domain to reduce an entropy of the CSI (see para. [0057], disclosing reduction of feedback overhead by transforming CSI information from the frequency domain to the angle-delay domain). Wu and Chen are considered analogous to the claimed invention because both are in the field of wireless communication methods and CSI compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu to include the beam-delay domain as described by Chen. The motivation to combine the references would be to reduce the overhead of the CSI feedback operation.

Regarding claim 18, Wu discloses an apparatus (see Fig. 10, para. [0005]). Wu fails to disclose, but Chen discloses, wherein implementing the expandable kernels includes adjusting sizes of kernels as kernel striding occurs over an input layer of CSI elements in the beam-delay domain, based on magnitudes of delays indicated in the beam-delay domain (see para. [0161]-[0162]).
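The antenna-frequency to beam-delay translation recited in claims 4 and 17 is, in common formulations, a 2-D DFT of the channel matrix: a DFT across the antenna axis (antenna → beam/angle) and an inverse DFT across the subcarrier axis (frequency → delay). A sketch with a toy sparse channel shows why the translation reduces entropy; the path gains, angles, and delays below are hypothetical values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_sub = 32, 256  # antennas x subcarriers

# Toy frequency-domain channel: a few propagation paths, each with one
# angle-of-departure and one integer delay tap (hypothetical parameters).
H = np.zeros((n_tx, n_sub), dtype=complex)
for gain, angle, delay in [(1.0, 0.2, 3), (0.5, -0.4, 7), (0.3, 0.1, 12)]:
    steer = np.exp(1j * np.pi * np.arange(n_tx) * np.sin(angle))
    phase = np.exp(-2j * np.pi * np.arange(n_sub) * delay / n_sub)
    H += gain * np.outer(steer, phase)

# Antenna axis -> beam domain (DFT); frequency axis -> delay domain (IDFT).
H_bd = np.fft.fft(np.fft.ifft(H, axis=1), axis=0)

# Energy concentrates in a few beam-delay bins, so keeping only the first
# few delay columns preserves almost all of the channel.
energy = np.abs(H_bd) ** 2
kept = energy[:, :16].sum() / energy.sum()
assert kept > 0.95
```

Because only a few delay columns carry energy in this domain, a kernel striding over the representation has far less structure to capture near large delays, which is one plausible reading of the delay-dependent kernel-size adjustment recited in claims 5 and 18.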
Wu and Chen are considered analogous to the claimed invention because both are in the field of wireless communication methods and CSI compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu to include the adjustable kernel sizes as described by Chen. The motivation to combine the references would be to improve the CSI feedback operation.

Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Yoo, as applied to claims 1 and 15 above, and further in view of Vahdat et al. (US 20210144779 A) (hereinafter "Vahdat").

Regarding claim 6, Wu discloses a method (see Fig. 10, para. [0005]). Wu fails to disclose a method wherein implementing the MHRA includes processing a layer of CSI elements via a convolutional transformer with re-attention (CVT-RA) block of the AI or ML-based encoder that comprises an MHRA function. However, Vahdat teaches such processing (see Fig. 3, Fig. 5, para. [0034], [0058]-[0061], disclosing a multi-head attention (MHA) mechanism and a neural network including a multi-head attention encoder that may include layers performing functions, where the input is the CSI for multiple UEs). Wu and Vahdat are considered analogous to the claimed invention because both are in the field of wireless communication methods and CSI compression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wu to include the processing as described by Vahdat. The motivation to combine the references would be to improve the CSI feedback operation.

Regarding claim 19, Wu discloses an apparatus (see Fig. 10, para. [0005]). Wu fails to disclose an apparatus wherein implementing the MHRA includes processing a layer of CSI elements via a convolutional transformer with re-attention (CVT-RA) block of the AI or ML-based encoder that comprises an MHRA function. However, Vahdat teaches such processing (see Fig. 3, Fig. 5, para. [0034], [0058]-[0061], as cited for claim 6). Wu and Vahdat are analogous art for the same reasons, and the same combination rationale and motivation apply.

Response to Arguments

Applicant's arguments filed 01/08/2026 (pages 8-10) have been fully considered, but they are not persuasive. Applicant amended claims 1, 10, and 15 in an effort to distinguish the claims from the prior art. However, the examiner respectfully disagrees.
The amended claims recite "…implements at least one of convolutional projection, expandable kernels, and multi-head re-attention (MHRA)…" (alternative language). Under the broadest reasonable interpretation, the limitation related to MHRA is optional in amended independent claims 1, 10, and 15. Therefore, the rejection is maintained. A new ground of rejection in view of Chen et al. (US 20250055531 A1) is provided for claims 4, 5, 17, and 18.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension-of-time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUIS GUILLERMO LEMA LEMOS, whose telephone number is (571) 272-5710. The examiner can normally be reached M-F, 8-5 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nishant Divecha, can be reached at 571-270-3125. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LUIS GUILLERMO LEMA LEMOS/
Examiner, Art Unit 2419

/Nishant Divecha/
Supervisory Patent Examiner, Art Unit 2419
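A note on the MHRA limitation that the Response to Arguments turns on: the claims define re-attention as new attention maps formed from a linear combination of the per-head query-key attention scores. That mechanism can be sketched as below. The shapes, names, and mixing matrix are illustrative only, following the "re-attention" idea from the vision-transformer literature rather than anything disclosed in the application itself.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_reattention(q, k, v, theta):
    """New attention maps = a linear combination (theta) of the per-head
    query-key attention score maps; the mixed maps are applied to the
    values. q, k, v: (heads, tokens, dim); theta: (heads, heads)."""
    dim = q.shape[-1]
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dim))  # (H, T, T)
    new_maps = np.einsum("gh,hts->gts", theta, scores)         # mix heads
    return new_maps @ v                                        # (H, T, dim)

rng = np.random.default_rng(2)
H, T, D = 4, 6, 8  # heads, tokens (CSI "words"), embedding dim
q, k, v = (rng.standard_normal((H, T, D)) for _ in range(3))
theta = rng.standard_normal((H, H))  # hypothetical learned head-mixing matrix
out = multi_head_reattention(q, k, v, theta)
assert out.shape == (H, T, D)
```

With theta set to the identity matrix, the function reduces to ordinary multi-head attention; a learned theta lets each output head draw on every head's query-key score map, which matches the "linear combination of an attention score for query-key pairs" language in the claims.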

Prosecution Timeline

Sep 05, 2023
Application Filed
Oct 15, 2025
Non-Final Rejection — §103
Oct 23, 2025
Interview Requested
Nov 05, 2025
Applicant Interview (Telephonic)
Nov 05, 2025
Examiner Interview Summary
Jan 08, 2026
Response Filed
Mar 20, 2026
Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: Favorable
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
