Prosecution Insights
Last updated: April 19, 2026
Application No. 17/653,442

SUPPORT OF ARTIFICIAL INTELLIGENCE OR MACHINE LEARNING TECHNIQUES FOR CHANNEL ESTIMATION AND MOBILITY ENHANCEMENTS

Non-Final OA §103
Filed: Mar 03, 2022
Examiner: SEN, ANINDITA
Art Unit: 2478
Tech Center: 2400 — Computer Networks
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 4-5
Time to Grant: 2y 11m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 83%, above average (62 granted / 75 resolved; +24.7% vs TC avg)
Interview Lift: +3.9%, minimal (among resolved cases with interview)
Avg Prosecution: 2y 11m typical timeline; 51 applications currently pending
Total Applications: 126 across all art units (career history)

Statute-Specific Performance

§101: 0.3% (-39.7% vs TC avg)
§103: 78.5% (+38.5% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 1.9% (-38.1% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 75 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

RESPONSE TO ARGUMENTS

The applicant's remarks dated 9/5/2025 have been considered and a new non-final Office action is issued.

Allowable Subject Matter

Claims 4-6, 11-13, 18, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 8, 10, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US20250203403A1) (hereinafter "Pez") in view of Wang et al. (US20210342687A1).

Regarding Claim 1, Pez teaches: A user equipment (UE), comprising: a transceiver configured to receive, from a base station, machine learning/artificial intelligence (ML/AI) configuration information, the ML/AI configuration information including an indication of whether to use an ML/AI approach for an operation (Fig. 7: BS sends configuration to UE; [24]: a method for wireless communications by a network entity may include at least one of sending to a UE, or receiving from the UE, one or more machine learning models and an indication of one or more TRPs for which the one or more machine learning models are applicable (= an indication of whether to use an ML/AI approach for an operation)); generate, based on performing the operation, UE assistance information related to updating the one or more ML models or the model parameters ([144]: the network entity may receive a notification (e.g., a first notification) (= UE assistance information related to updating the one or more ML models) from the UE to switch to a second machine learning model that is applicable for a second TRP based on the indication when the UE moves from the first TRP to a second TRP); and wherein the transceiver is further configured to transmit the UE assistance information for updating the one or more ML models or the model parameters ([144]-[145]: FIG. 11 illustrates a call flow diagram illustrating example signaling between UEs (such as a UE 1 and a UE 2) and a BS for machine learning model training, sharing, and updating).

Pez does not teach: an index; and a processor operatively coupled to the transceiver, the processor configured to: determine, based on the index, (i) the operation and (ii) one or more ML models and model parameters for the operation. Wang teaches these limitations (Fig. 17, [196]: with reference to the environment 1600-2 of FIG. 16-1, the base station transmits each index value (= index) associated with the set of candidate neural network formation configurations to the UE; [198]: at 1720, the UE 110 selects a candidate neural network formation configuration of the set of candidate neural network formation configurations (= determine, based on the index); the UE selects the candidate neural network formation configuration that forms the respective deep neural network that decodes the expected data pattern with the least bit errors relative to other candidate neural network formation configurations (= (i) the operation); at 1725, the UE 110 forms a deep neural network using the selected candidate neural network formation configuration (= one or more ML models and model parameters for the operation)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez with an index and a processor operatively coupled to the transceiver, the processor configured to determine, based on the index, (i) the operation and (ii) one or more ML models and model parameters for the operation, as taught by Wang, to use the model and parameters to fine-tune training data for the UE.
Regarding Claim 3, Pez does not teach: The UE of claim 1, wherein the transmitted UE assistance information includes local data at the UE and the local data comprises one or more of UE location, UE trajectory/mobility, UE speed, UE orientation, UE battery level, estimated delay and Doppler spread, experienced error rate, experienced quality of service, estimated channel status, inference results, or updated model parameters. Wang teaches this limitation ([118]: for a scenario in which a UE moves to a lower power state). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez with these limitations, as taught by Wang, to use the model and parameters to fine-tune training data for the UE.

Regarding Claim 8, Pez teaches: A method, comprising: receiving, at a user equipment (UE) from a base station, machine learning/artificial intelligence (ML/AI) configuration information including an indication of whether to use an ML/AI approach for an operation ([24]: a method for wireless communications by a network entity may include at least one of sending to a UE, or receiving from the UE, one or more machine learning models and an indication of one or more TRPs for which the one or more machine learning models are applicable (= an indication of whether to use an ML/AI approach for an operation)); generating, at the UE based on performing the operation, UE assistance information related to updating the one or more ML models or the model parameters ([144]: the network entity may receive a notification (e.g., a first notification) (= UE assistance information related to updating the one or more ML models) from the UE to switch to a second machine learning model that is applicable for a second TRP based on the indication when the UE moves from the first TRP to a second TRP); and transmitting, from the UE to the base station, the UE assistance information for updating the one or more ML models or the model parameters ([144]-[145]: FIG. 11 illustrates a call flow diagram illustrating example signaling between UEs (such as a UE 1 and a UE 2) and a BS for machine learning model training, sharing, and updating).

Pez does not teach: an index; and determining, based on the index, (i) the operation and (ii) one or more ML models and model parameters for the operation. Wang teaches these limitations (Fig. 17, [196]: the base station transmits each index value (= index) associated with the set of candidate neural network formation configurations to the UE; [198]: at 1720, the UE 110 selects a candidate neural network formation configuration of the set of candidate neural network formation configurations (= determine, based on the index); the UE selects the candidate neural network formation configuration that forms the respective deep neural network that decodes the expected data pattern with the least bit errors relative to other candidate neural network formation configurations (= (i) the operation); at 1725, the UE 110 forms a deep neural network using the selected candidate neural network formation configuration (= one or more ML models and model parameters for the operation)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez with an index and determining, based on the index, (i) the operation and (ii) one or more ML models and model parameters for the operation, as taught by Wang, to use the model and parameters to fine-tune training data for the UE.

Regarding Claim 10, Pez does not teach: The method of claim 8, wherein the transmitted UE assistance information includes local data at the UE and the local data comprises one or more of UE location, UE trajectory/mobility, UE speed, UE orientation, UE battery level, estimated delay and Doppler spread, experienced error rate, experienced quality of service, estimated channel status, inference results, or updated model parameters. Wang teaches this limitation ([118]: for a scenario in which a UE moves to a lower power state). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez with these limitations, as taught by Wang, to use the model and parameters to fine-tune training data for the UE.

Regarding Claim 15, Pez teaches: A base station (BS), comprising: a processor configured to generate machine learning/artificial intelligence (ML/AI) configuration information, the ML/AI configuration information including an indication of whether to use an ML/AI approach for an operation ([24]: a method for wireless communications by a network entity may include at least one of sending to a UE, or receiving from the UE, one or more machine learning models and an indication of one or more TRPs for which the one or more machine learning models are applicable (= an indication of whether to use an ML/AI approach for an operation)); UE assistance information for updating the one or more ML models or the model parameters, the UE assistance information generated based on the operation ([144]: the network entity may receive a notification (e.g., a first notification) (= UE assistance information related to updating the one or more ML models) from the UE to switch to a second machine learning model that is applicable for a second TRP based on the indication when the UE moves from the first TRP to a second TRP); and a transceiver operably coupled to the processor and configured to: transmit the ML/AI configuration information to a user equipment (UE), and receive, from the UE, the UE assistance information ([144]-[145]: FIG. 11 illustrates a call flow diagram illustrating example signaling between UEs (such as a UE 1 and a UE 2) and a BS for machine learning model training, sharing, and updating).

Pez does not teach: an index that indicates (i) the operation and (ii) one or more ML models and model parameters for the operation. Wang teaches this limitation (Fig. 17, [196]: the base station transmits each index value (= index) associated with the set of candidate neural network formation configurations to the UE; [198]: at 1720, the UE 110 selects a candidate neural network formation configuration of the set of candidate neural network formation configurations (= determine, based on the index); the UE selects the candidate neural network formation configuration that forms the respective deep neural network that decodes the expected data pattern with the least bit errors relative to other candidate neural network formation configurations (= (i) the operation); at 1725, the UE 110 forms a deep neural network using the selected candidate neural network formation configuration (= one or more ML models and model parameters for the operation)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez with an index that indicates (i) the operation and (ii) one or more ML models and model parameters for the operation, as taught by Wang, to use the model and parameters to fine-tune training data for the UE.
Regarding Claim 17, Pez does not teach: The BS of claim 15, wherein the received UE assistance information includes local data at the UE, and the local data at the UE comprises one or more of UE location, UE trajectory/mobility, UE speed, UE orientation, UE battery level, estimated delay and Doppler spread, experienced error rate, experienced quality of service, estimated channel status, inference results, or updated model parameters. Wang teaches this limitation ([118]: for a scenario in which a UE moves to a lower power state). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez with these limitations, as taught by Wang, to use the model and parameters to fine-tune training data for the UE.

Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as unpatentable over Pezeshki et al. (US20250203403A1) (hereinafter "Pez") in view of Wang et al. (US20210342687A1), in further view of Ottersten et al. (US20210345134A1) (hereinafter "Johan").

Regarding Claim 2, Wang teaches ML model parameter updates based on local training for updating the one or more ML models or the model parameters for the UL channel prediction, or local data at the UE ([197]: the UE then analyzes each respective deep neural network, such as by analyzing the outputs of each respective deep neural network and generating respective metric(s) (e.g., accuracy, bit errors, etc.)). Pez in view of Wang does not teach: The UE of claim 1, wherein the UE assistance information for updating the one or more ML models or the model parameters is for uplink (UL) channel prediction and includes one or more of a UE inference on predicted UL channel status, or a modulation and coding scheme (MCS) index of an MCS autonomously selected by the UE for a following UL transmission based on the predicted channel status. Johan teaches these limitations: a UE inference on predicted UL channel status (Fig. 9, [240]: the predictions are sent to the UE, e.g. the wireless device 120, 122, and the wireless device applies them to the link (another example of whether ML model parameters for the UL channel prediction are used)); and an MCS index of an MCS autonomously selected by the UE for a following UL transmission based on the predicted channel status ([143]: for example, the network node 110, 111, 120, 122, 130 may update the parameters of the machine learning model, e.g. update one or more weights in a neural network, after evaluating that the MCS selection is too conservative and thus does not fully utilize the channel (= whether ML model parameters for the UL channel prediction are used)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez in view of Wang with these limitations, as taught by Johan, to use the model and parameters to fine-tune training data for the UE.

Regarding Claim 9, Pez in view of Wang does not teach: The method of claim 8, wherein the UE assistance information for updating the one or more ML models or the model parameters is for uplink (UL) channel prediction and includes one or more of: a UE inference on a predicted UL channel status, or a modulation and coding scheme (MCS) index of an MCS autonomously selected by the UE for a following UL transmission based on the predicted UL channel status. Johan teaches these limitations at the passages cited for Claim 2 (Fig. 9, [240]; [143]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez in view of Wang with these limitations, as taught by Johan, to use the model and parameters to fine-tune training data for the UE.

Regarding Claim 16, Pez in view of Wang does not teach: The BS of claim 15, wherein the UE assistance information for updating the one or more ML models or the model parameters is for uplink (UL) channel prediction and includes one or more of a UE inference on a predicted UL channel status, or a modulation and coding scheme (MCS) index of an MCS autonomously selected by the UE for a following UL transmission based on the predicted UL channel status. Johan teaches these limitations at the passages cited for Claim 2 (Fig. 9, [240]; [143]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez in view of Wang with these limitations, as taught by Johan, to use the model and parameters to fine-tune training data for the UE.

Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US20250203403A1) (hereinafter "Pez") in view of Wang et al. (US20210342687A1), in further view of Jung et al. (US20230319656A1).

Regarding Claim 7, Pez in view of Wang does not teach: The UE of claim 1, wherein the ML/AI approach is for cell selection/reselection parameters, and wherein one of: the UE assistance information relating to cell selection/reselection is configured to be reported while the UE is in one of inactive mode or idle mode using a medium access control (MAC) control element (CE), or a timer restricts a frequency of reporting of the UE assistance information relating to cell selection/reselection. Jung teaches these limitations: [51]: FIG. 13 is an exemplary diagram illustrating a procedure related to the use of an AI receiver between a UE and a BS related to cell selection according to the present disclosure; [279]: the UE measures performance of the AI receiver within a period of an AI receiver evaluation timer given by the BS to check whether the performance of the receiver has deteriorated (S1801).
The timer period may be configured when the BS transmits AI-related information. A parameter indicating receiver performance includes a block error rate (BLER). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez in view of Wang with these limitations, as taught by Jung, to use the model and parameters to fine-tune training data for the UE.

Regarding Claim 14, Pez in view of Wang does not teach: The method of claim 8, wherein the ML/AI approach is for cell selection/reselection parameters, and wherein one of: the UE assistance information relating to cell selection/reselection is configured to be reported while the UE is in one of inactive mode or idle mode using a medium access control (MAC) control element (CE), or a timer restricts a frequency of reporting of the UE assistance information relating to cell selection/reselection. Jung teaches these limitations at the passages cited for Claim 7 ([51]; [279]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez in view of Wang with these limitations, as taught by Jung, to use the model and parameters to fine-tune training data for the UE.

Regarding Claim 20, Pez in view of Wang does not teach: The BS of claim 15, wherein the ML/AI approach is for cell selection/reselection parameters, and wherein one of: the UE assistance information relating to cell selection/reselection is configured to be reported while the UE is in one of inactive mode or idle mode using a medium access control (MAC) control element (CE), or a timer restricts a frequency of reporting of the UE assistance information relating to cell selection/reselection. Jung teaches these limitations at the passages cited for Claim 7 ([51]; [279]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pez in view of Wang with these limitations, as taught by Jung, to use the model and parameters to fine-tune training data for the UE.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Anindita Sen, whose telephone number is (571) 272-2390. The examiner can normally be reached 7:30am-5:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Avellino, can be reached at (571) 272-3905. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANINDITA SEN/
Examiner, Art Unit 2478

/JOSEPH E AVELLINO/
Supervisory Patent Examiner, Art Unit 2478

Prosecution Timeline

Mar 03, 2022
Application Filed
Mar 22, 2024
Non-Final Rejection — §103
Jun 26, 2024
Response Filed
Nov 20, 2024
Non-Final Rejection — §103
Feb 25, 2025
Response Filed
Jun 04, 2025
Final Rejection — §103
Sep 05, 2025
Response after Non-Final Action
Sep 05, 2025
Notice of Allowance
Oct 28, 2025
Response after Non-Final Action
Jan 31, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547166
METHOD AND SYSTEM FOR TELEOPERATIONS AND SUPPORT SERVICES
2y 5m to grant • Granted Feb 10, 2026
Patent 12543075
METHOD AND MULTI SIM UE FOR MANAGING DATA SESSION IN WIRELESS NETWORK
2y 5m to grant • Granted Feb 03, 2026
Patent 12526771
OFDMA TRIGGER BASED PEER TO PEER OPERATIONS WITH DUAL-STAGE TRIGGERING
2y 5m to grant • Granted Jan 13, 2026
Patent 12526789
COMMUNICATION METHOD AND APPARATUS
2y 5m to grant • Granted Jan 13, 2026
Patent 12490205
METHODS AND DEVICES OF INFORMATION TRANSMISSION AND INFORMATION RECEPTION, AND TERMINAL
2y 5m to grant • Granted Dec 02, 2025
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 83%
With Interview: 87% (+3.9%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 75 resolved cases by this examiner. Grant probability derived from career allow rate.
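The arithmetic behind the headline figures can be sketched from the stated inputs (62 granted of 75 resolved, +3.9 percentage-point interview lift); the rounding to whole percentages is an assumption, since only the displayed values appear in the source.

```python
# Sketch of how the dashboard's projections appear to be derived.
# Inputs come from the figures stated above; rounding behavior is assumed.

granted, resolved = 62, 75   # examiner's resolved career cases
interview_lift = 3.9         # percentage-point lift with an interview

allow_rate = 100 * granted / resolved           # 82.67%
grant_probability = round(allow_rate)           # displayed as 83%
with_interview = round(allow_rate + interview_lift)  # displayed as 87%

print(grant_probability, with_interview)  # 83 87
```

This reproduces the displayed 83% grant probability and 87% with-interview figure exactly, which suggests the dashboard simply rounds the raw allow rate and its interview-adjusted value.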
