Prosecution Insights
Last updated: April 19, 2026
Application No. 18/668,336

LEARNING COMMUNICATION SYSTEMS USING CHANNEL APPROXIMATION

Non-Final OA — §103, §DP

Filed: May 20, 2024
Examiner: KAMARA, MOHAMED A
Art Unit: 2412
Tech Center: 2400 — Computer Networks
Assignee: DeepSig Inc.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 98%

Examiner Intelligence

Grants 89% — above average

Career Allow Rate: 89% (933 granted / 1,046 resolved; +31.2% vs TC avg)
Interview Lift: +8.7% (moderate, roughly +9%; allow rate with vs. without an interview, among resolved cases)
Avg Prosecution: 2y 6m typical timeline (42 applications currently pending)
Total Applications: 1,088 across all art units
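For concreteness, here is a minimal sketch of how the two headline figures above (career allow rate and interview lift) could be computed from per-case records. The ResolvedCase fields and function names are illustrative assumptions, not the actual schema behind this dashboard.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # hypothetical field: did the case issue as a patent?
    had_interview: bool  # hypothetical field: was an examiner interview held?

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Career allow rate = grants / resolved cases."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate difference between interviewed and non-interviewed cases."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# With the figures shown above: 933 grants out of 1,046 resolved cases
# gives 933 / 1046 ≈ 0.892, i.e. the displayed 89% career allow rate.
```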

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 17.3% (-22.7% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 1,046 resolved cases.
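As a consistency check on the chart, all four deltas line up with a single Tech Center average estimate of 40% (for example, 50.6% minus 40% gives the +10.6% shown for §103). A short sketch, treating the 40% figure as an assumption read off the deltas rather than a published number:

```python
# TC average implied by the deltas above (e.g. 7.0% - (-33.0%) = 40.0%);
# treated here as an assumption, not a published figure.
TC_AVG_ESTIMATE = 0.40

examiner_rejection_rates = {"§101": 0.070, "§103": 0.506, "§102": 0.110, "§112": 0.173}

for statute, rate in examiner_rejection_rates.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
# §101: 7.0% (-33.0% vs TC avg)
# §103: 50.6% (+10.6% vs TC avg)
# §102: 11.0% (-29.0% vs TC avg)
# §112: 17.3% (-22.7% vs TC avg)
```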

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to the Application filed on 05/20/2024. Claims 2-31 are pending. Claim 1 is canceled in a preliminary amendment. Claims 2-31 are newly added. Claims 2, 4-8, 10-13, 15-19, 21-24, 26-27, and 29-31 are rejected. Claims 3, 9, 14, 20, 25, and 28 are objected to for depending from rejected base claims.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 2-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7, 16-17, and 20 of Patent No. US 11991658 B2 (hereinafter "O'Shea") in view of Carl Munkberg et al. (US 20180357537 A1).

For Claim 2, Claim 1 of O'Shea teaches all of the claimed subject matter (see claim 1) with the exception of a channel machine-learning network comprising one or more variational layers using random sampling operations. However, Munkberg, in analogous art, discloses a channel machine-learning network comprising one or more variational layers using random sampling operations (Munkberg teaches in ¶ 0038 that by sampling the signal at random times, a neural network model can be effectively trained to perform up-sampling/signal prediction using a large set of sparse training data; Munkberg teaches in ¶ 0037 that the computation efficiency techniques may be directly applied to neural networks as most layers (such as fully connected and convolution layers) are implemented using matrix multiplications). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of O'Shea with the random sampling taught by Munkberg. The motivation is so that the neural network model can be effectively trained to perform up-sampling/signal prediction using a large set of sparse training data [Munkberg: ¶ 0038].

Regarding the remaining claims, the table below maps the claims in the instant application to corresponding claims which have substantially the same limitations [up to and including limitations of parent and intervening claims] in Patent No. US 11991658 B2.

Table 1: Claim Mapping to Patent No. US 11991658 B2

Claim # in Instant Application (18668336) | Claim # in Patent No. US 11991658 B2
2. (New) A method performed by at least one processor to communicate over a communication channel, the method comprising: transmitting a first radio frequency (RF) signal through a first communication channel that implements a channel machine-learning network comprising one or more variational layers using random sampling operations, the first communication channel representing a model of a physical communication channel; and obtaining a second RF signal as an output of the first communication channel, the second RF signal corresponding to one of (i) the first RF signal altered by transmission through the channel machine-learning network, or (ii) the first RF signal processed by one or more of multiplicative operations or convolutional operations in the channel machine-learning network. | 1. A method performed by at least one processor to train at least one machine-learning network to communicate over a communication channel, the method comprising: transmitting input information through a first communication channel; obtaining first information as an output of the first communication channel; transmitting the input information through a second communication channel implementing a channel machine-learning network, the second communication channel representing a model of the first communication channel; obtaining second information as an output of the second communication channel; providing the first information or the second information to a discriminator machine-learning network as an input; obtaining an output of the discriminator machine-learning network; updating the channel machine-learning network using the output of the discriminator machine-learning network; and using the second communication channel implementing the updated channel machine-learning network to determine one or more performance metrics that represent an estimate of the performance of the first communication channel.
4 | 7
5 | 16
6 | 17
13 | 1
16 | 16
17 | 17
24 | 20

In view of the above, it is clear that the conflicting claims are not patentably distinct from each other because claim 2 of the instant application merely broadens the scope of claim 1 of US 11991658 B2 by omitting limitations, such as updating the channel machine-learning network using the output of the discriminator machine-learning network; and using the second communication channel implementing the updated channel machine-learning network to determine one or more performance metrics that represent an estimate of the performance of the first communication channel.

Claims 3, 7-12, 14-15, 18-23, and 25-31 are rejected for depending from rejected base claims.

Claims 2-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 19, and 25-26 of Patent No. US 11259260 B2 (hereinafter "Timothy"). Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1, 13, and 24 of the instant application merely broaden the scope of claims 1 and 25-26, respectively, of Patent No. US 11259260 B2 by omitting limitations, such as using the channel machine-learning network to predict a channel response; and using the channel response to determine a candidate location for at least one of a cell tower, an antenna, a remote radio head, or an antenna array. It has been held that the omission of an element and its function is an obvious expedient if the remaining elements perform the same function as before. In re Karlson, 136 USPQ 184 (CCPA); also note Ex parte Rainu, 168 USPQ 375 (Bd. App. 1969); the omission of a reference element whose function is not needed would be obvious to one skilled in the art.

Regarding the remaining claims, the table below maps the claims in the instant application to corresponding claims which have substantially the same limitations [up to and including limitations of parent and intervening claims] in Patent No. US 11259260 B2.

Table 2: Claim Mapping to Patent No. US 11259260 B2

Claim # in Instant Application (18668336) | Claim # in Patent No. US 11259260 B2
2. (New) A method performed by at least one processor to communicate over a communication channel, the method comprising: transmitting a first radio frequency (RF) signal through a first communication channel that implements a channel machine-learning network comprising one or more variational layers using random sampling operations, the first communication channel representing a model of a physical communication channel; and obtaining a second RF signal as an output of the first communication channel, the second RF signal corresponding to one of (i) the first RF signal altered by transmission through the channel machine-learning network, or (ii) the first RF signal processed by one or more of multiplicative operations or convolutional operations in the channel machine-learning network. | (1+5) A method performed by at least one processor to train at least one machine-learning network to communicate over a communication channel, the method comprising: transmitting input information through a first communication channel; obtaining first information as an output of the first communication channel; transmitting the input information through a second communication channel implementing a channel machine-learning network, the second communication channel representing a model of the first communication channel; obtaining second information as an output of the second communication channel; providing the first information or the second information to a discriminator machine-learning network as an input; obtaining an output of the discriminator machine-learning network; updating the channel machine-learning network using the output of the discriminator machine-learning network; using the channel machine-learning network to predict a channel response; and using the channel response to determine a candidate location for at least one of a cell tower, an antenna, a remote radio head, or an antenna array. + 5. The method of claim 1, wherein the channel machine-learning network includes one or more variational layers or neurons containing a random sampling operation using at least one of inputs or weights to define a particular aspect of a probability distribution.
4 | 19
12 | 5
13 | 25
24 | 26

Claims 3, 5-11, 14-23, and 25-31 are rejected for depending from rejected base claims.

Claims 2-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 14-16, and 19 of Patent No. US 10531415 B2 (hereinafter "Hillburn") in view of Carl Munkberg et al. (US 20180357537 A1).

For Claim 2, Claim 1 of Hillburn teaches all of the claimed subject matter (see claim 1) with the exception of a channel machine-learning network comprising one or more variational layers using random sampling operations. However, Munkberg, in analogous art, discloses a channel machine-learning network comprising one or more variational layers using random sampling operations (Munkberg teaches in ¶ 0038 that by sampling the signal at random times, a neural network model can be effectively trained to perform up-sampling/signal prediction using a large set of sparse training data; Munkberg teaches in ¶ 0037 that the computation efficiency techniques may be directly applied to neural networks as most layers (such as fully connected and convolution layers) are implemented using matrix multiplications). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Hillburn with the random sampling taught by Munkberg.
The motivation is so that the neural network model can be effectively trained to perform up-sampling/signal prediction using a large set of sparse training data [Munkberg: ¶ 0038].

Regarding the remaining claims, the table below maps the claims in the instant application to corresponding claims which have substantially the same limitations [up to and including limitations of parent and intervening claims] in Patent No. US 10531415 B2.

Table 3: Claim Mapping to Patent No. US 10531415 B2

Claim # in Instant Application (18668336) | Claim # in Patent No. US 10531415 B2
2. (New) A method performed by at least one processor to communicate over a communication channel, the method comprising: transmitting a first radio frequency (RF) signal through a first communication channel that implements a channel machine-learning network comprising one or more variational layers using random sampling operations, the first communication channel representing a model of a physical communication channel; and obtaining a second RF signal as an output of the first communication channel, the second RF signal corresponding to one of (i) the first RF signal altered by transmission through the channel machine-learning network, or (ii) the first RF signal processed by one or more of multiplicative operations or convolutional operations in the channel machine-learning network. | 1. A method performed by at least one processor to train at least one machine-learning network to communicate over a communication channel, the method comprising: obtaining first information; using an encoder machine-learning network to process the first information and generate a first radio-frequency signal; transmitting the first radio-frequency signal through a first communication channel; determining a second radio-frequency signal that represents the first radio-frequency signal having been altered by transmission through the first communication channel; simulating transmission of the first radio-frequency signal over a second communication channel implementing a channel machine-learning network, the second communication channel representing a model of the first communication channel; determining a simulated radio-frequency signal that represents the first radio-frequency signal having been altered by simulated transmission through the second communication channel; calculating a first measure of distance between the second radio-frequency signal and the simulated radio-frequency signal; and updating the channel machine-learning network using the first measure of distance.
3 | 1
5 | 2
6 | 3
13 | 14
14 | 14
16 | 15
17 | 16
24 | 19
26 | 19

Claims 4, 7-12, 15, 18-23, 25, and 27-31 are rejected for depending from rejected base claims.

In view of the above, it is clear that the conflicting claims are not patentably distinct from each other because claim 2 of the instant application merely broadens the scope of claim 1 of US 10531415 B2 by omitting limitations, such as calculating a first measure of distance between the second radio-frequency signal and the simulated radio-frequency signal; and updating the channel machine-learning network using the first measure of distance.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2, 4-8, 10-13, 15-19, 21-24, 26-27, and 29-31 are rejected under 35 U.S.C. 103 as being unpatentable over Dörner et al. (submitted in the IDS of June 10, 2024, "Dörner") in view of Carl Munkberg et al. (US 20180357537 A1).

Regarding claim 2, Dörner discloses a method performed by at least one processor to communicate over a communication channel (see FIG. 3), the method comprising: transmitting input information through a first communication channel (transmitting a signal through a real channel, see FIG. 3 and first full ¶ on pg. 135); obtaining first information as an output of the first communication channel (recording the corresponding IQ samples, see FIG. 3 and first full ¶ on pg. 135); transmitting a first radio frequency (RF) signal through a first communication channel that implements a channel machine-learning network comprising one or more variational layers, the second communication channel representing a model of the first communication channel (transmitting a signal through a stochastic channel model, see FIG. 3 and last ¶ on pg. 134-first full ¶ on pg. 135); obtaining a second RF signal as an output of the first communication channel (training the stochastic channel model using the output of the stochastic channel model, see FIG. 3 and last ¶ on pg. 134-first full ¶ on pg. 135, second ¶ in § "IV. Results" on pg. 139); the second RF signal corresponding to one of (i) the first RF signal altered by transmission through the channel machine-learning network, or (ii) the first RF signal processed by one or more of multiplicative operations or convolutional operations in the channel machine-learning network (training using the stochastic channel model to approximate as closely as possible the behavior of the expected channel, see FIG. 3 and last ¶ on pg. 134-first full ¶ on pg. 135, second ¶ in § "IV. Results" on pg. 139; training using the stochastic channel model involves comparing the output of the stochastic channel model with the expected output, changing parameters of the stochastic channel model, and repeating these steps).

Dörner fails to expressly disclose a channel machine-learning network comprising one or more variational layers using random sampling operations. However, Munkberg, in analogous art, discloses a channel machine-learning network comprising one or more variational layers using random sampling operations (Munkberg teaches in ¶ 0038 that by sampling the signal at random times, a neural network model can be effectively trained to perform up-sampling/signal prediction using a large set of sparse training data; Munkberg teaches in ¶ 0037 that the computation efficiency techniques may be directly applied to neural networks as most layers (such as fully connected and convolution layers) are implemented using matrix multiplications). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dörner with the random sampling taught by Munkberg. The motivation is so that the neural network model can be effectively trained to perform up-sampling/signal prediction using a large set of sparse training data [Munkberg: ¶ 0038].

Regarding claim 4, Dörner discloses a method wherein determining one or more performance metrics of the communications system corresponding to the physical communication channel comprises: evaluating one or more of bit error rate (BER) (without finetuning, there is a gap of 2 dB at a BLER of 10⁻⁴, and this gap can be reduced to 1 dB through finetuning, see ¶ 2 on pg. 142), signal-to-noise ratio (SNR), transmission power consumption, throughput, latency, bandwidth, fading, interference, or distortion associated with signal transmission using the communications system.

Regarding claim 5, Dörner discloses a method further comprising: determining, based at least on the first RF signal and the second RF signal, one or more performance metrics corresponding to the physical communication channel; and in response to determining the one or more performance metrics, updating one or more parameters of an encoder machine learning network included in a communications transmitter or communications receiver used for communications over the physical communication channel (performing a supervised finetuning of the receiver using the IQ samples of the real channel and/or training the stochastic channel model using the output of the stochastic channel model, see FIG. 3 and last ¶ on pg. 134-first full ¶ on pg. 135, second ¶ in § "IV. Results" on pg. 139; in other words, performing supervised finetuning or training involves comparing the output to the expected output and finetuning [i.e., updating] the receiver or training [i.e., updating] the stochastic channel model based on the comparison).

Regarding claim 6, Dörner discloses a method wherein the encoder machine learning network represents a second encoder machine learning network included in the first communication channel transmitting input information through a first communication channel (autoencoder transmitting a signal through a real channel or a stochastic channel model, see FIG. 3 and last ¶ on pg. 134).

Regarding claim 7, Dörner discloses all of the claimed subject matter with the exception of updating at least one of weights or connectivity of one or more neural network layers of the channel machine-learning network.
However, Munkberg, in analogous art, discloses updating at least one of weights or connectivity of one or more neural network layers of the channel machine-learning network (Munkberg teaches in ¶ 0035 a second unit that performs backpropagation, i.e., updates the neural network weights based on the loss gradient). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dörner with the backpropagation taught by Munkberg. The motivation is so that the neural network model can be effectively trained to perform up-sampling/signal prediction using a large set of sparse training data [Munkberg: ¶ 0038].

Regarding claim 8, Dörner discloses a method wherein determining one or more performance metrics corresponding to the physical communication channel comprises: evaluating one or more of bit error rate (BER), signal-to-noise ratio (SNR), transmission power consumption, throughput, latency, bandwidth, fading, interference, or distortion associated with signal transmission over the physical communication channel (without finetuning, there is a gap of 2 dB at a BLER of 10⁻⁴, and this gap can be reduced to 1 dB through finetuning, see ¶ 2 on pg. 142).

Regarding claim 10, Dörner discloses a method wherein the one or more variational layers represent channel impairments and effects corresponding to the physical communications channel (training using the stochastic channel model to approximate as closely as possible the behavior of the expected channel, see FIG. 3 and last ¶ on pg. 134-first full ¶ on pg. 135, second ¶ in § "IV. Results" on pg. 139; training using the stochastic channel model involves comparing the output of the stochastic channel model with the expected output, changing parameters of the stochastic channel model, and repeating these steps).

Regarding claim 11, Dörner discloses all of the claimed subject matter with the exception that one or more variational layers include one or more of multiplications, divisions, or summations of inputs and intermediate values, followed by non-linearities arranged in one of a feed-forward manner or with feedback and in-layer connections. However, Munkberg, in analogous art, discloses that one or more variational layers include one or more of multiplications, divisions, or summations of inputs and intermediate values (Munkberg teaches in ¶ 0037 that the computation efficiency techniques may be directly applied to neural networks as most layers (such as fully connected and convolution layers) are implemented using matrix multiplications), followed by non-linearities arranged in one of a feed-forward manner or with feedback and in-layer connections (Munkberg teaches in ¶ 0035 a second unit that performs backpropagation, i.e., updates the neural network weights based on the loss gradient). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dörner with the backpropagation taught by Munkberg. The motivation is so that the neural network model can be effectively trained to perform up-sampling/signal prediction using a large set of sparse training data [Munkberg: ¶ 0038].
Regarding claim 12, Dörner discloses all of the claimed subject matter with the exception that the one or more variational layers perform the random sampling operations using at least one of inputs or weights corresponding to a particular aspect of a probability distribution. However, Munkberg, in analogous art, discloses that the one or more variational layers perform the random sampling operations using at least one of inputs or weights corresponding to a particular aspect of a probability distribution (Munkberg teaches in ¶ 0038 that by sampling the signal at random times, a neural network model can be effectively trained to perform up-sampling/signal prediction using a large set of sparse training data; Munkberg teaches in ¶ 0037 that the computation efficiency techniques may be directly applied to neural networks as most layers (such as fully connected and convolution layers) are implemented using matrix multiplications). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Dörner with the random sampling taught by Munkberg. The motivation is so that the neural network model can be effectively trained to perform up-sampling/signal prediction using a large set of sparse training data [Munkberg: ¶ 0038].

Regarding claim 13, please refer to the rejection of claim 2, above. Regarding claims 15-19 and 21-23, please refer to the rejection of claims 4-8 and 10-12, above. Regarding claim 24, please refer to the rejection of claim 2, above. Regarding claims 26-27, please refer to the rejection of claims 5 and 8, above. Regarding claims 29-31, please refer to the rejection of claims 10-12, above.

Allowable Subject Matter

Claims 3, 9, 14, 20, 25, and 28 are objected to for depending from rejected base claims, but would be allowable if rewritten to overcome the nonstatutory double patenting rejection(s) set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Reider et al. (US 20200186227 A1) teaches a system (100) comprising a training module (104) coupled to the measurement module (102), wherein the training module (104) is configured to generate a machine learning model based on the channel quality measurements.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED A KAMARA, whose telephone number is (571) 270-5629. The examiner can normally be reached M-F 9AM-4PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, CHARLES JIANG, can be reached at (571) 270-7191. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MOHAMED A KAMARA/
Primary Examiner, Art Unit 2412

Prosecution Timeline

May 20, 2024 — Application Filed
Nov 05, 2024 — Response after Non-Final Action
Mar 05, 2026 — Non-Final Rejection, §103 and §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604250 — CLI REPORTING FOR HANDOVER
Granted Apr 14, 2026 · 2y 5m to grant

Patent 12581342 — MDT METHOD AND APPARATUS
Granted Mar 17, 2026 · 2y 5m to grant

Patent 12581356 — Multi-Link Device Load Signaling and Use in WLAN
Granted Mar 17, 2026 · 2y 5m to grant

Patent 12581385 — REPEATER HANDOVER DECISION BASED ON END-TO-END LINK QUALITY
Granted Mar 17, 2026 · 2y 5m to grant

Patent 12581477 — DATA TRANSMISSION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Granted Mar 17, 2026 · 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 98% (+8.7%)
Median Time to Grant: 2y 6m
PTA Risk: Low

Based on 1,046 resolved cases by this examiner. Grant probability is derived from the career allow rate.
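The "With Interview" figure appears to be a simple additive combination of the career allow rate and the interview lift, capped at 100%. A minimal sketch under that assumption (the formula is inferred from the displayed numbers, not confirmed):

```python
def grant_probability_with_interview(base_rate: float, lift: float) -> float:
    """Additive model (an assumption about this dashboard, not a confirmed
    formula): base grant probability plus interview lift, capped at 100%."""
    return min(base_rate + lift, 1.0)

# 89% career allow rate + 8.7% interview lift = 97.7%, which rounds to
# the 98% "With Interview" projection shown above.
print(f"{grant_probability_with_interview(0.89, 0.087):.0%}")  # -> 98%
```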
