Prosecution Insights
Last updated: April 19, 2026
Application No. 18/327,776

DRIVE TEST BASED NETWORK PERFORMANCE ESTIMATION

Final Rejection — §102, §103, §112
Filed: Jun 01, 2023
Examiner: DEDITCH, AARON CLYDE
Art Unit: 2642
Tech Center: 2600 — Communications
Assignee: DISH NETWORK L.L.C.
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (8 granted / 11 resolved; +10.7% vs TC avg)
Interview Lift: +37.5%, strong (based on resolved cases with interview)
Typical Timeline: 2y 11m average prosecution; 12 applications currently pending
Career History: 23 total applications across all art units
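
The figures above are simple ratios over this examiner's resolved cases. As a rough illustration only (not this tool's actual methodology), the Python sketch below derives a career allow rate and interview lift from a hypothetical case list whose split happens to reproduce the displayed values; the per-case split and the TC-average baseline are assumptions, with the baseline back-solved from the +10.7% delta shown above.

from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # application issued as a patent
    had_interview: bool  # an examiner interview was held during prosecution

# Hypothetical split of the 11 resolved cases, chosen so the totals match the
# panel above (8 granted overall; interviewed cases fare better).
cases = (
    [ResolvedCase(granted=True, had_interview=True)] * 3
    + [ResolvedCase(granted=True, had_interview=False)] * 5
    + [ResolvedCase(granted=False, had_interview=False)] * 3
)

def allow_rate(subset):
    """Fraction of cases in `subset` that ended in a grant."""
    return sum(c.granted for c in subset) / len(subset) if subset else 0.0

career = allow_rate(cases)                                          # 8/11, about 72.7%
with_iv = allow_rate([c for c in cases if c.had_interview])         # 3/3 = 100%
without_iv = allow_rate([c for c in cases if not c.had_interview])  # 5/8 = 62.5%

TC_AVG = 0.62  # assumed TC 2600 baseline, back-solved from the +10.7% delta shown
print(f"Career allow rate: {career:.0%} ({career - TC_AVG:+.1%} vs TC avg)")
print(f"Interview lift: {with_iv - without_iv:+.1%}")

Under that assumed split, all three interviewed cases were allowed (100%) versus five of eight without an interview (62.5%), a 37.5-point lift matching the card above.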

Statute-Specific Performance

§101: 1.7% (-38.3% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§103: 56.0% (+16.0% vs TC avg)
§112: 31.9% (-8.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 11 resolved cases
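
Each "vs TC avg" delta above is simply the examiner's per-statute rate minus the Tech Center baseline, and every displayed delta back-solves to a 40% baseline, which appears to be the estimate used here. A minimal sketch of that arithmetic follows; the single 40% baseline is an assumption inferred from the deltas, not a figure taken from USPTO data.

# Examiner per-statute rates from the panel above; the TC-average baseline is an
# assumption back-solved from the displayed deltas (each works out to 40%).
examiner_rates = {"§101": 0.017, "§102": 0.103, "§103": 0.560, "§112": 0.319}
tc_avg_estimate = 0.40

for statute, rate in examiner_rates.items():
    print(f"{statute}: {rate:.1%} ({rate - tc_avg_estimate:+.1%} vs TC avg)")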

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The Replacement Drawings (including the Original Drawings) do not comply with 37 CFR 1.121(d) since they do not comply with 37 C.F.R. § 1.84(l). Thus, a new corrected drawing for Figure 3 in compliance with 37 CFR 1.121(d) is required in this application because the Replacement Drawing for Figure 3 does not comply with 37 C.F.R. § 1.84(l), which reads as follows:

(l) Character of lines, numbers, and letters. All drawings must be made by a process which will give them satisfactory reproduction characteristics. Every line, number, and letter must be durable, clean, black (except for color drawings), sufficiently dense and dark, and uniformly thick and well-defined. The weight of all lines and letters must be heavy enough to permit adequate reproduction. This requirement applies to all lines however fine, to shading, and to lines representing cut surfaces in sectional views. Lines and strokes of different thicknesses may be used in the same drawing where different thicknesses have a different meaning.

Applicant is again advised to employ the services of a competent patent draftsperson outside the Office, as the U.S. Patent and Trademark Office no longer prepares new drawings. The corrected drawings are required in reply to the Office action to avoid abandonment of the application. The requirement for corrected drawings will not be held in abeyance.

Claim Objections

Claims 3, 4, 6, 13, 14, and 16 are again objected to for various informalities, since they have not been addressed in the present response, as follows:

Claim 3, line 1, is objected to because “wherein determining the drive test” should read “wherein the determining of the drive test”.
Claim 4, line 1, is objected to because “wherein determining the drive test” should read “wherein the determining of the drive test”.
Claim 6, line 1, is objected to because “wherein causing performance” should read “wherein the causing of performance”.
Claim 13, line 1, is objected to because “wherein determining the drive test” should read “wherein the determining of the drive test”.
Claim 14, line 1, is objected to because “wherein determining the drive test” should read “wherein the determining of the drive test”.
Claim 16, line 1, is objected to because “wherein causing performance” should read “wherein the causing of performance”.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1-20 are rejected under 35 U.S.C. § 112(a), as not containing a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same.
As to claim 1, as amended, it includes the limitations in which “before the communications network is deployed, generating predicted network performance metrics based on the analyzing of the drive test data, the predicted network performance metrics comprising a prediction of the performance of the communications network after it is deployed”. That is, [A] “generating predicted network performance metrics based on the analyzing of the drive test data” “before the communications network is deployed”, in which the [B] “predicted network performance metrics compris[es] a prediction of the performance of the communications network after it is deployed”. Clauses [A] and [B] appear inconsistent (and indefinite) on their face, since they refer both to predicted performance before deployment and predicted performance after deployment. In this regard, it is noted that the Applicant’s response (including at page 9) identified no proper definite support (or any support) for this limitation. In fact, no proper definite support is found for this limitation in the specification, including paragraphs [0001], [0002], [0004], [0029], [0030], [0031], and [0036]”—as cited in the Amendment by the applicants. It is noted that paragraphs [0029] and [0036] only refer to the following: [0029] FIG. 2 is a flow diagram depicting an example process 200 for estimating performance of a communications network (e.g., a 5G network as described above) in accordance with some embodiments of the techniques described herein. In various embodiments, the process 200 is performed in real time and based on drive tests using, e.g., drive test devices 124 a-124 c. In various embodiments, the drive tests performed herein are not responsive to network failures or issues that are reported by users of a live network or otherwise detected by a live network, are not implemented to verify or assess network coverage, capacity, quality, or performance after deployment of a new network or its service, function, upgrade, or configuration, are not directed to benchmark, optimize, or troubleshoot a network with live network communication traffic generated by subscribed users; rather, the drive tests are used as a basis for emulating, predicting, or otherwise estimating performance of a network prior to live traffic of users. Illustratively, at least some part of the process 200 can be implemented by the performance estimation service 102, or one or more drive test devices 124 a-124 c of FIG. 1. . . . . [0036] At block 208, the process 200 includes generating emulated, predicted, or otherwise estimated network performance metrics based on the analysis of the drive test data. In some embodiments, the metrics are generated in real-time, which emulate, predict, or otherwise estimate KPIs that are computed based on live traffic generated by users on the communications network (e.g., after network deployment). In some embodiments, the process 200 includes classifying failures of the communications network based on the network performance metrics, prior to live traffic being generated by users on the communications network. Illustratively, classifying such failures can include creating scenarios to simulate the failures based on root cause analysis of one or more errors using the network performance metrics. In some embodiments, potential remedial actions with respect to one or more of the failures can be identified. Claim 1 is therefore rejected under 35 U.S.C. 
§ 112(a), as not containing a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same. Claims 2 to 10 depend from claim 1, as amended, and they are therefore rejected for the same reasons as claim 1, as amended. As to claim 11, as amended, it recites the same deficient clause as claim 1, so that claim 11, as amended, is rejected under 35 U.S.C. § 112(a) for the same reasons as claim 1, as amended. Claims 12 to 17 depend from claim 11, as amended, and they are therefore rejected for the same reasons as claim 1, as amended. As to claim 18, as amended, it recites the same deficient clause as claim 1, so that claim 18, as amended, is rejected under 35 U.S.C. § 112(a) for the same reasons as claim 1, as amended. Claims 19 to 20 depend from claim 18, as amended, and they are therefore rejected for the same reasons as claim 1, as amended. Finally, the specification is objected to for lack of antecedent basis for the claimed subject matter as to each of claims 1, 11 and 18, as summarized above as to the specification. Claim Rejections ‐ 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre‐AIA 35 U.S.C. § 102 and § 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre‐AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 1 to 3, 7, 11 to 13, 17, 19, and 20 are rejected under 35 U.S.C. § 102(a)(1) as anticipated by Shakir et al., Key performance indicators analysis for 4 G-LTE cellular networks based on real measurements (March 2023) (“the Shakir reference”), for the following reasons. Claim 1 is directed to a “computer-implemented method for estimating performance of a communications network”. In this regard, the Shakir reference discloses (at page 4, Section 3) a computer- implemented method (“The data were gathered from 377 serving cells and processed and analyzed by SPSS, MATLAB, and Python to obtain the results”). Still further, the Abstract discloses the following: Key Performance Indicator (KPI) gives potential information that needs for successful network deploying, performance study, and enhanced networks. . . . [T]his work demonstrates Long Term-Evolution (LTE) data measurements and performance analysis of the KPI at the 2100MHz frequency band via a bandwidth of 20MHz for three mobile operators in Iraq, including Najaf city called (OP1, OP2, and OP3) for data confidentiality. Data collection is done by drive tests, from routes to characterize the cellular network. 
The data measurements have been focused on the parameters that affect the network directly, such as Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), Signal to Interference and noise Ratio (SINR), Received Signal Strength Indicator (RSSI), and Downlink Throughput (DL Throughput). Studying the analysis behavior of these parameters and the probability distribution function (PDF) for the KPIs demonstrates the relationship and dependency among these parameters for the three mentioned operators. Finally, these KPIs provide useful information for network management, assessment, and suitable requirements for Cellular network operators for voice and data services. Accordingly, the Shakir reference discloses or at least suggests the limitation of a “computer-implemented method for estimating performance of a communications network”. Claim 1 recites the further limitation of “obtaining a proposed entire coverage of the communications network”, “prior to deployment of the communications network”. In this regard, the Shakir reference discloses (page 2, section 1, left hand column) (“These KPIs data that influence the coverage and capacity of the network”; “analysis of the 4 G-LTE cellular network is studied based on KPIs reported from the driving test from . . . cellular network operators”; “It analyzes the 4 G-LTE network and gives the theoretical background for the coverage of LTE networks”). Also, Figure 1 of Section 1 of the Shakir reference refers to the KPIs of “radio network performance”. Still further, the Shakir reference (page 8, Section 4) discloses that the “study may include the coverage area”. Finally, as explained above, the Abstract of the Shakir reference specifically discloses that: “Key Performance Indicator (KPI) gives potential information that [is] need[ed] for successful network deploying”—that is prior to or before deployment of the communication network. Accordingly, based on the foregoing, the Shakir reference discloses or at least suggests the further limitation of “obtaining a proposed entire coverage of the communications network”, “prior to deployment of the communications network”. Claim 1 recites the further limitations of “determining a drive test pattern including a plurality of routes commensurate with the proposed entire coverage of the communications network”, “causing performance of a plurality of drive tests in accordance with the drive test pattern”, and “in real-time with data capturing by one or more of the plurality of drive tests”. In this regard, the Shakir reference discloses (in the Abstract) that “[d]ata collection is done by drive tests, from routes to characterize the cellular network”. Still further, the Shakir reference discloses (at section 3) Figures 3 and 4, which show a plurality of pathways and routes—that is a plurality of drive test patterns—on which a vehicle is driven to collect data. Accordingly, the Shakir reference discloses or at least suggests the further limitations of “determining a drive test pattern including a plurality of routes commensurate with the proposed entire coverage of the communications network”, “causing performance of a plurality of drive tests in accordance with the drive test pattern”, and “in real-time with data capturing by one or more of the plurality of drive tests”. Claim 1 recites the further limitation of “analyzing drive test data from the plurality of drive tests”. 
That is, drive test data from the drive tests are analyzed (to generate emulated (or simulated) network performance metrics—such as, Key Performance Indicators (KPIs)). As regards emulated network performance metrics, the instant specification (at paragraph [0006]) discloses that: “the emulated network performance metrics emulate KPIs that are computed based on live traffic generated by users on the communications network”. Also, in this regard, the Shakir reference discloses (at page 1351, section 3, 1st full paragraph, left-hand column; page 1354, section 4, 1st paragraph, left-hand column) the following: Here is the description of the analyzed [(that is, emulated or simulated)] KPIs that are generated from DT [(drive tests)]. Figure 3 shows the samples of the pathway for the strength of RSRP for OP1 [(OPerator 1)], OP2 [(OPerator 2)], and OP3 [(OPerator 3)], respectively. In addition, Fig. 4 show[s] the pathway samples for the strength of RSRQ for OP1, OP2, and OP3,respectively. . . . . [S]tudy performance for each operator in a different city or at a different time (such as morning and night, dynamic and static download, and crowded and non-crowded) for signal repeatability. Also, the study may include the coverage area using the RF propagation models depending on the frequency band. Also, we are planning to study frequency extrapolation for this area and compare it with other cities. . . . Accordingly, the Shakir reference discloses or at least suggests the limitation of “analyzing drive test data from the plurality of drive tests”. Finally, claim 1, as amended, recites the limitation of [A] “before the communications network is deployed”, [B] “generating predicted network performance metrics based on the analyzing of the drive test data”, [C] the “predicted network performance metrics comprising a prediction of the performance of the communications network after it is deployed”. As to clause [A], this clause simply repeats the second clause of claim 1, which is disclosed by the Shakir reference as explained above so that clause [A] is disclosed for the same reasons. In particular, as explained above, the Abstract of the Shakir reference specifically discloses that: “Key Performance Indicator (KPI) gives potential information that [is] need[ed] for successful network deploying”—that is prior to or before deployment of the communication network. As to clause [B], it is first noted that paragraph [0036] of the instant specification states that the “process 200 includes generating emulated, predicted, or otherwise estimated network performance metrics based on the analysis of the drive test data”, so that emulate and predict are both estimates. In this regard, the Shakir reference discloses (at page 1351, section 3, 1st full paragraph, left-hand column; page 1354, section 4, 1st paragraph, left-hand column) the following: Here is the description of the analyzed [(that is, emulated, predicted or simulated)] KPIs that are generated from DT [(drive tests)]. Figure 3 shows the samples of the pathway for the strength of RSRP for OP1 [(OPerator 1)], OP2 [(OPerator 2)], and OP3 [(OPerator 3)], respectively. In addition, Fig. 4 show[s] the pathway samples for the strength of RSRQ for OP1, OP2, and OP3,respectively. . . . . [S]tudy performance for each operator in a different city or at a different time (such as morning and night, dynamic and static download, and crowded and non-crowded) for signal repeatability. 
Also, the study may include the coverage area using the RF propagation models depending on the frequency band. Also, we are planning to study frequency extrapolation for this area and compare it with other cities. . . . Accordingly, the Shakir reference discloses or at least suggests the limitation of “generating predicted” [(that is, emulated or estimated)] “network performance metrics for the communications network based on the analyzing of the drive test data”. As to clause [C], “predicted network performance metrics” necessarily include a “prediction of the performance of the communications network”. As to the term-phrase “after it is deployed”, the Abstract and Section 2 (at page 1348, section 2, 2nd paragraph)) of the Shakir reference discloses that these computed (estimated or predicted) “KPIs provide useful information for network management, assessment, and suitable requirements for Cellular network operators for voice and data services”, and that KPIs “can be utilized for monitoring and optimizing the cellular network to afford the QoS for the wireless network”. Network management and assessment monitoring of QoS (Quality of Service) are done for a communication network after it is deployed. In short, the Shakir reference discloses or at least suggests the limitations of [A] “before the communications network is deployed”, [B] “generating predicted network performance metrics based on the analyzing of the drive test data”, [C] the “predicted network performance metrics comprising a prediction of the performance of the communications network after it is deployed”, as in claim 1 as amended. Claim 1 is therefore rejected under 35 U.S.C. § 102(a)(1) as anticipated by the Shakir reference. Claim 2 depends from claim 1, and it recites the further limitation in which the “proposed entire coverage of the communications network includes an entirety of proposed geographic regions to be served by the communications network upon its deployment”. In this regard, the Abstract of the Shakir reference discloses that “[d]ata collection is done by drive tests, from routes to characterize the cellular network”. Still further, the Shakir reference discloses (at section 3) Figures 3 and 4, which show a plurality of pathways and routes—that is a plurality of drive test patterns—on which a vehicle is driven to collect data. Still further, the Shakir reference discloses (at page 2, section 1) (“These KPIs data that influence the coverage and capacity of the network”; “analysis of the 4 G-LTE cellular network is studied based on KPIs reported from the driving test from . . . cellular network operators”; “It analyzes the 4 G-LTE network and gives the theoretical background for the coverage of LTE networks”). Also, Figure 1 of Section 1 of the Shakir reference refers to the KPIs of “radio network performance”, and the Shakir reference (at page 8, Section 4) discloses that the “study may include the coverage area”. Finally, as explained above, the Abstract of the Shakir reference discloses that: “Key Performance Indicator (KPI) gives potential information that [is] need[ed] for successful network deploying, performance study”. Thus, in view of the foregoing, the Shakir reference discloses or at least suggests the further limitation in which the “proposed entire coverage of the communications network includes an entirety of proposed geographic regions to be served by the communications network upon its deployment”. Claim 2 is therefore rejected under 35 U.S.C. 
§ 102(a)(1) as anticipated by the Shakir reference for the same reasons as claim 1, and for the foregoing further reasons. Claim 3 depends from claim 2, and it recites the further limitation of “determining the drive test pattern comprises identifying publicly accessible surface pathways based on the proposed geographic regions”. In this regard, the Abstract of the Shakir reference discloses that “[d]ata collection is done by drive tests, from routes to characterize the cellular network”. Still further, the Shakir reference discloses (at section 3) Figures 3 and 4, which show a plurality of pathways and routes—that is a plurality of drive test patterns—on which a vehicle is driven to collect data. Still further, the Shakir reference discloses (at page 2, section 1) (“These KPIs data that influence the coverage and capacity of the network”; “analysis of the 4 G-LTE cellular network is studied based on KPIs reported from the driving test from . . . cellular network operators”; “It analyzes the 4 G-LTE network and gives the theoretical background for the coverage of LTE networks”). Also, Figure 1 of Section 1 of the Shakir reference refers to the KPIs of “radio network performance”, and the Shakir reference (at page 8, Section 4) discloses that the “study may include the coverage area”. Finally, as explained above, the Abstract of the Shakir reference discloses that: “Key Performance Indicator (KPI) gives potential information that [is] need[ed] for successful network deploying, performance study”. Thus, in view of the foregoing, the Shakir reference discloses or at least suggests the further limitation of “determining the drive test pattern comprises identifying publicly accessible surface pathways based on the proposed geographic regions”. Claim 3 is therefore rejected under 35 U.S.C. § 102(a)(1) as anticipated by the Shakir reference for the same reasons as claims 1 and 2, and for the foregoing further reasons. Claim 7 depends from claim 1, and it recites the further limitation in which the “emulated network performance metrics emulate key performance indicators (KPIs) that are computed based on live traffic generated by users on the communications network”. In this regard and as explained with respect to claim 1, the Shakir reference discloses (at page 1351, section 3, 1st full paragraph, left-hand column; page 1354, section 4, 1st paragraph, left-hand column) the following: Here is the description of the analyzed [(that is, emulated or simulated)] KPIs that are generated from DT [(drive tests)]. Figure 3 shows the samples of the pathway for the strength of RSRP for OP1 [(OPerator 1)], OP2 [(OPerator 2)], and OP3 [(OPerator 3)], respectively. In addition, Fig. 4 show[s] the pathway samples for the strength of RSRQ for OP1, OP2, and OP3,respectively. The probability density function (PDF) of KPIs data measurements is illustrated in Fig. 5. . . . . [S]tudy performance for each operator in a different city or at a different time (such as morning and night, dynamic and static download, and crowded and non-crowded) for signal repeatability. Also, the study may include the coverage area using the RF propagation models depending on the frequency band. Also, we are planning to study frequency extrapolation for this area and compare it with other cities. . . . 
Accordingly, the Shakir reference discloses or at least suggests the limitation in which the “emulated network performance metrics emulate key performance indicators (KPIs) that are computed based on live traffic generated by users on the communications network”. Claim 7 is therefore rejected under 35 U.S.C. § 102(a)(1) as anticipated by the Shakir reference for the same reasons as claim 1, and for the foregoing further reasons. Independent claim 11 is to a “network performance estimation system for a communications network” comprising: “at least one memory that stores computer executable instructions” and “at least one processor that executes the computer executable instructions to cause actions to be performed”. Like claim 1, which is a “method for estimating performance of a communications network”, claim 11 is to a “network performance estimation system for a communications network”, so that this limitation is disclosed by the Shakir reference for essentially the same reasons as the preamble of claim 1. As explained with respect to claim 1, the Shakir reference discloses or at least suggests the limitation of a “computer-implemented method for estimating performance of a communications network”. Thus, the Shakir reference discloses or at least suggests the limitations of “at least one memory that stores computer executable instructions” and “at least one processor that executes the computer executable instructions to cause actions to be performed” for essentially the same reasons that the Shakir reference discloses or at least suggests the limitation of a “computer-implemented” method. As to the recited “actions” of claim 11, these recited “actions” are like the method steps of claim 1, so that the Shakir reference discloses these recited “actions” for the same reasons as the method steps of claim 1. Claim 11 is therefore rejected under 35 U.S.C. § 102(a)(1) as anticipated by the Shakir reference for the foregoing reasons. Claim 12 depends from claim 11, and like claim 2, it recites the further limitation in which the “proposed entire coverage of the communications network includes an entirety of proposed geographic regions to be served by the communications network upon its deployment”. Claim 12 is therefore rejected under 35 U.S.C. § 102(a)(1) as anticipated by the Shakir reference for the same reasons as claim 11 and claim 2. Claim 13 depends from claim 12, and like claim 3, it recites the further limitation of “determining the drive test pattern comprises identifying publicly accessible surface pathways based on the proposed geographic regions”. Claim 13 is therefore rejected under 35 U.S.C. § 102(a)(1) as anticipated by the Shakir reference for the same reasons as claims 11 and 12 and claim 3, and for the foregoing further reasons. Claim 17 depends from claim 11, and like claim 7, it recites the further limitation in which the “emulated network performance metrics emulate key performance indicators (KPIs) that are computed based on live traffic generated by users on the communications network”. Claim 17 is therefore rejected under 35 U.S.C. § 102(a)(1) as anticipated by the Shakir reference for the same reasons as claim 11 and claim 7, and for the foregoing further reasons. Independent claim 18 is to a “non-transitory computer-readable medium storing contents that, when executed by one or more processors, cause actions to be performed”. 
As explained with respect to claim 1, the Shakir reference discloses or at least suggests the limitation of a “computer-implemented method for estimating performance of a communications network”. Thus, the Shakir reference discloses or at least suggests the limitations of a “non-transitory computer-readable medium storing contents that, when executed by one or more processors, cause actions to be performed” for essentially the same reasons that the Shakir reference discloses or at least suggests the limitation of a “computer-implemented” method. As to the recited “actions” of claim 18, these recited “actions” are like the method steps of claim 1, so that the Shakir reference discloses these recited “actions” for the same reasons as the method steps of claim 1. Claim 18 is therefore rejected under 35 U.S.C. § 102(a)(1) as anticipated by the Shakir reference for the foregoing reasons. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. § 102 and § 103 (or as subject to pre-AIA 35 U.S.C. § 102 and § 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows: Determining the scope and contents of the prior art. Ascertaining the differences between the prior art and the claims at issue. Resolving the level of ordinary skill in the pertinent art. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention. Claims 4, 5, 14 and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over the Shakir reference in view of El-Saleh et al., Measurements and Analyses of 4G/5G Mobile Broadband Networks: An Overview and a Case Study (April 14, 2023) (“the El-Saleh reference”) for the following reasons. 
Claim 4 depends from claim 3, and it recites the further limitation in which the “determining” of the “drive test pattern comprises associating respective time windows and drive test devices with different portions of the identified surface pathways”. As explained with respect to claim 1, the Shakir reference discloses (at section 3) Figures 3 and 4, which show a plurality of pathways and routes—that is a plurality of drive test patterns—on which a vehicle is driven to collect data. Still further, the El-Saleh reference discloses or at least suggests (at pages 4 and 16) the use of different periods [(that is, time windows)] and drive test devices for use with different pathways (so as to access a particular subscriber service), as follows: (p.4, left hand column) A dataset of several client-side wireless network quality characteristics was collected using Android network monitoring the application of G-NetTrack Pro. The dataset comprises 30 one-hour public transport bus journeys taken at three different periods [(that is, time windows)] throughout the day. [The] acquired data was analyzed and looked into the impacts of time and place on the network’s observed throughput and signal strength. . . . . . . . (p.16) [T]he subscriber’s access to the service depends on the type of the subscriber’s device [(that is, using a particular drive test device for using a particular path having access to a particular subscriber service)] and the package to which he was subscribed, as the network cannot be used in all devices even if the device supports the 5G technology. . . . . (p.22) 4.5.2. Limitations. . . . : Field measurements were limited to specific areas [(such as an identified path)] and a short period [(time window)]. The performance was not representative of all morphologies; for expanding the scope of research and measurement, it is possible to measure in other different regions, including rural, suburban, and urban areas, and increase the period [(time window)] of each region. The measurements were taken in one period of time and most during peak hours, and the network performance was not assessed with diverse climatic conditions. [The] driving test can be performed during peak hours and out of peak and can be extended further to multiple, longer drive tests for each area. . . . . Accordingly, the Shakir reference, in view of the El-Saleh reference, discloses or at least suggests the further limitation in which the “determining” of the “drive test pattern comprises associating respective time windows and drive test devices with different portions of the identified surface pathways”. The Shakir reference and the El-Saleh reference are plainly analogous to the claimed subject matter because they are in the same field of endeavor of cellular (4G/5G) performance estimation techniques. The Shakir reference discloses a method, as in independent claim 1. The El-Saleh reference discloses or at least suggests a known technique that is plainly applicable to the method. It is therefore the case that it would have been obvious to a person having ordinary skill in the art before the effective date filing date of the claimed invention to modify the Shakir reference by applying the known technique, based on the teachings, motivations and/or suggestions of the El-Saleh reference, so as to yield predictable results and resulted in an improved method that includes the capability of associating respective time windows and drive test devices with different identified surface pathways. 
Claim 4 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the El-Saleh reference, for the same reasons as claims 1, 2 and 3, and for the foregoing further reasons. Claim 5 depends from claim 4, and it recites the further limitation in which the “respective time windows and drive test devices are determined based on at least one of a [A] real-time processing constraint, [B] drive test device accessibility constraint, or [C] performance metrics density requirement”. Since the claim elements [A], [B], or [C] are listed disjunctively, the claim limitation is satisfied if only one of the claim elements [A], [B], or [C] is established. As regards the claim element of [B] a “drive test device accessibility constraint”, the El-Saleh reference discloses (at pages 2 and 16), the following: (page 2, left hand column) A range of user devices, physical disabilities, mobility, and accessibility settings are all real impact measures of MBB performance. . . . MBB networks, such as third-generation (3G) and fourth-generation (4G) wireless technologies, are frequently used to access the Internet for various services. . . . . . . . (p.16) [T]he subscriber’s access to the service depends on the type of the subscriber’s device and the package to which he was subscribed, as the network cannot be used in all devices, even if the device supports the 5G technology. In view of the range of user devices that could be used and their accessibility settings, together with the fact that the subscriber’s access to the service depends on the type of the subscriber’s device and the subscription service, as the network cannot be used in all devices. The time windows and drive test devices would necessarily be determined based on accessibility constraints of the type of the drive test device, as disclosed and explained by the El-Saleh reference. Also, as regards the claim element of [A] a “real-time processing constraint”, a person having ordinary skill in the art would appreciate and understand that the use of time windows and drive test devices of a “computer-implemented method for estimating performance of a communications network” would necessarily be determined based on “real-time processing constraints”—which would be the case for any computer-implemented method. Accordingly, the Shakir reference, in view of the El-Saleh reference, discloses or at least suggests the further limitation in which the “time windows and drive test devices are determined” based on a drive test device accessibility constraint or a real-time processing constraint. The Shakir reference and the El-Saleh reference are plainly analogous to the claimed subject matter because they are in the same field of endeavor of cellular (4G/5G) performance estimation techniques. The Shakir reference discloses a method, as in independent claim 1. The El-Saleh reference discloses or at least suggests a known technique that is plainly applicable to the method. It is therefore the case that it would have been obvious to a person having ordinary skill in the art before the effective date filing date of the claimed invention to modify the Shakir reference by applying the known technique, based on the teachings, motivations and/or suggestions of the El-Saleh reference, so as to yield predictable results and resulted in an improved method that includes the capability of time windows and drive test devices determined based on a drive test device accessibility constraint or a real-time processing constraint. 
Claim 5 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the El-Saleh reference, for the same reasons as claims 1, 2, 3 and 4, and for the foregoing further reasons. Claim 14 depends from claim 13, and like claim 4, it recites the further limitation in which the “determining” of the “drive test pattern comprises associating respective time windows and drive test devices with different portions of the identified surface pathways”. Claim 14 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the El-Saleh reference, for the same reasons as claims 11, 12, 13 and claim 4, and for the foregoing further reasons. Claim 15 depends from claim 14, and like claim 5, it recites the further limitation in which the “respective time windows and drive test devices are determined based on at least one of a real-time processing constraint, drive test device accessibility constraint, or performance metrics density requirement”. Claim 15 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the El-Saleh reference, for the same reasons as claims 11, 12, 13, 14 and claim 5, and for the foregoing further reasons. Claims 6, 16, and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over the Shakir reference in view of Friedrich, 5G Redefines Drive Testing (May 31, 2020) (“the Friedrich reference”)(at page 3), for the following reasons. Claim 6 depends from claim 1, and it recites the further limitation in which the “causing” of the “performance of the plurality of drive tests in accordance with the drive test pattern” comprises “causing performance of at least two of the drive tests” in [A] a “temporally parallel” or [B] “partially overlapping manner”. Since the claim elements [A] or [B] are listed disjunctively, the claim limitation is satisfied if only one of the claim elements [A] or [B] is established. As regards the claim element of [B] a “partially overlapping manner”, the Friedrich reference discloses (at page 3) this claim element as follows: (p.3) Verifying the field performance and capacity gain of a massive MIMO implementation is critical to the overall performance of a 5G NR network. Field testing requires multiple test UEs distributed throughout a cell. . . . Accurate [drive] testing requires that these test UEs be physically spread across the cell area, rather than all bunched in one area. (It is virtually impossible to isolate users to nonoverlapping beams if the UEs are too tightly packed together). . . . . In view of the foregoing, the drive tests, which are done in accordance with a drive test pattern as explained as to claim 1, necessarily can be done in an overlapping manner because of the overlapping beams (since “it is virtually impossible to isolate users to nonoverlapping beams if the UEs are too tightly packed together). Accordingly, the Shakir reference, in view of the Friedrich reference, discloses or at least suggests the further limitation in which the “causing” of the “performance of the plurality of drive tests in accordance with the drive test pattern” comprises “causing performance of at least two of the drive tests” in a “partially overlapping manner”. The Shakir reference and the Friedrich reference are plainly analogous to the claimed subject matter because they are in the same field of endeavor of cellular (4G/5G) performance estimation techniques. The Shakir reference discloses a method, as in independent claim 1. 
The Friedrich reference discloses or at least suggests a known technique that is plainly applicable to the method. It is therefore the case that it would have been obvious to a person having ordinary skill in the art before the effective date filing date of the claimed invention to modify the Shakir reference by applying the known technique, based on the teachings, motivations and/or suggestions of the Friedrich reference, so as to yield predictable results and resulted in an improved method that includes the capability of “causing performance of at least two of the drive tests” in a “partially overlapping manner”. Claim 6 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the Friedrich reference, for the same reasons as claim 1, and for the foregoing further reasons. Claim 16 depends from claim 11, and like claim 6, it recites the further limitation in which the “causing” of the “performance of the plurality of drive tests in accordance with the drive test pattern comprises causing performance of at least two of the drive tests in a temporally parallel or partially overlapping manner”. Claim 16 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the Friedrich reference, for the same reasons as claim 11 and claim 6, and for the foregoing further reasons. Claim 19 depends from claim 18, and like claim 6, it recites in which the “causing” of the “performance of the plurality of drive tests in accordance with the drive test pattern comprises causing performance of at least two of the drive tests in a temporally parallel or partially overlapping manner”. Claim 19 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the Friedrich reference, for the same reasons as claim 18 and claim 6, and for the foregoing further reasons. Claims 8, 9, 10, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over the Shakir reference in view of the Friedrich reference and Lamberti, 19 Network Metrics: How to Measure Network Performance, Obkio (March 6, 2023) (“the Lamberti reference”), for the following reasons. Claim 8 depends from claim 1, and it recites the further limitation of “classifying failures of the communications network based on the emulated network performance metrics”, “prior to live traffic being generated by users on the communications network”. The Friedrich reference discloses (at pages 1, 3, and 5) drive testing (which obtains KPIs (metrics)), which is necessarily prior to live traffic being generated by users on the communications network, as follows: (p.1) . . . . [U]ser equipment (UE)-based active field testing on the live network, often referred to as drive testing. . . . . (p.3) Verifying the field performance and capacity gain of a massive MIMO implementation is critical to the overall performance of a 5G NR network. Field testing requires multiple test UEs distributed throughout a cell. . . . Accurate testing requires that these test UEs be physically spread across the cell area, rather than all bunched in one area. (It is virtually impossible to isolate users to nonoverlapping beams if the UEs are too tightly packed together). (p.5) Testing the latency and peak throughput of the 5G connection is another critical step for identifying [(classifying)] points of failure for dropped calls or handover issues. 
Drive testing at the application layer is crucial for translating the user experience into measurable KPIs and speeding up the verification of use cases. The Lamberti reference discloses (at pages 45-48, sections 16 and 17) the following: Network Metric #16. Network Error Rate Network error rate is a measure of the number of errors that occur in network traffic, expressed as a percentage of the total number of packets transmitted. Errors can occur in various forms, such as packet loss, packet corruption, and other transmission errors, and can impact network performance and reliability. . . . . Monitoring network error rates is important for network administrators to ensure that their networks are performing at optimal levels. By tracking error rates and identifying [(classifying)] the causes of errors, network administrators can take steps to optimize network performance and improve reliability. For example, they can identify network congestion, hardware failures, or misconfigurations that may be causing errors, and take appropriate action [(remedial actions)] to address these issues. Network KPI Examples for Network Error Rate Network error rate is a KPI that measures the percentage of errors that occur in network traffic. These errors can include packet loss, packet corruption, and other transmission errors that can impact network performance. Some common network KPIs, related to network errors, for network error rate include: Packet Loss Rate. . . .; Bit Error Rate (BER). . . .; Frame Error Rate (FER). . . .; Error Correction Rate (ECR). . . .; Retransmission Rate. . . . . . . . Network Metric #17. TCP Retransmission Rate . . . . A high TCP retransmission rate can indicate issues [(that is, failures)] such as network congestion, packet loss, or network errors, which can impact the performance. . . . To improve the TCP retransmission rate, network administrators may . . . identify [(classify)] and address [(remediate)] these issues. This may involve optimizing network capacity, identifying and replacing faulty network equipment, or adjusting network configuration settings—which are remedial actions. Accordingly, the Shakir reference, in view of the Friedrich reference and the Lamberti reference, discloses or at least suggests the further limitation of “classifying failures of the communications network based on the emulated network performance metrics, prior to live traffic being generated by users on the communications network”. The Shakir reference, the Friedrich reference, and the Lamberti reference are plainly analogous to the claimed subject matter because they are in the same field of endeavor of cellular (4G/5G) performance estimation techniques. The Shakir reference discloses a method, as in independent claim 1. Both the Friedrich reference and the Lamberti reference disclose or at least suggest a known technique that is plainly applicable to the method. It is therefore the case that it would have been obvious to a person having ordinary skill in the art before the effective date filing date of the claimed invention to modify the Shakir reference by applying the known technique, based on the teachings, motivations and/or suggestions of the Friedrich reference and the Lamberti reference, so as to yield predictable results and resulted in an improved device/method that includes the capability of “classifying failures of the communications network based on the emulated network performance metrics, prior to live traffic being generated by users on the communications network”. 
Claim 8 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the Friedrich reference and the Lamberti reference, for the same reasons as claim 1, and for the foregoing further reasons. Claim 9 depends from claim 8, and it recites the further limitation in which the “classifying” of the “failures” comprises “creating scenarios to simulate the failures based on root cause analysis of one or more errors using the emulated network performance metrics”. Official Notice is taken that persons having ordinary skill in the art can create or use scenarios to simulate failures based on an analysis of identified errors using metrics (which can include network metrics (KPIs)). The Friedrich reference discloses (at pages 1, 3, and 5) drive testing (which obtains KPIs (metrics)), as follows: (p.1) . . . . [U]ser equipment (UE)-based active field testing on the live network, often referred to as drive testing. . . . . (p.3) Verifying the field performance and capacity gain of a massive MIMO implementation is critical to the overall performance of a 5G NR network. Field testing requires multiple test UEs distributed throughout a cell. . . . Accurate testing requires that these test UEs be physically spread across the cell area, rather than all bunched in one area. (It is virtually impossible to isolate users to nonoverlapping beams if the UEs are too tightly packed together). (p.5) Testing the latency and peak throughput of the 5G connection is another critical step for identifying [(classifying)] points of failure for dropped calls or handover issues. Drive testing at the application layer is crucial for translating the user experience into measurable KPIs and speeding up the verification of use cases. The Lamberti reference discloses (at pages 45-48, sections 16 and 17) the following: Network Metric #16. Network Error Rate Network error rate is a measure of the number of errors that occur in network traffic, expressed as a percentage of the total number of packets transmitted. Errors can occur in various forms, such as packet loss, packet corruption, and other transmission errors, and can impact network performance and reliability. . . . . Monitoring network error rates is important for network administrators to ensure that their networks are performing at optimal levels. By tracking error rates and identifying [(classifying)] the causes of errors, network administrators can take steps to optimize network performance and improve reliability. For example, they can identify network congestion, hardware failures, or misconfigurations that may be causing errors, and take appropriate action [(remedial actions)] to address these issues. Network KPI Examples for Network Error Rate Network error rate is a KPI that measures the percentage of errors that occur in network traffic. These errors can include packet loss, packet corruption, and other transmission errors that can impact network performance. Some common network KPIs, related to network errors, for network error rate include: Packet Loss Rate. . . .; Bit Error Rate (BER). . . .; Frame Error Rate (FER). . . .; Error Correction Rate (ECR). . . .; Retransmission Rate. . . . . . . . Network Metric #17. TCP Retransmission Rate . . . . A high TCP retransmission rate can indicate issues [(that is, failures)] such as network congestion, packet loss, or network errors, which can impact the performance. . . . To improve the TCP retransmission rate, network administrators may . . . 
identify [(classify)] and address [(remediate)] these issues. This may involve optimizing network capacity, identifying and replacing faulty network equipment, or adjusting network configuration settings—which are remedial actions. Accordingly, the Shakir reference, in view of the Friedrich reference and the Lamberti reference, discloses or at least suggests the further limitation in which the “classifying” of the “failures” comprises “creating scenarios to simulate the failures based on root cause analysis of one or more errors using the emulated network performance metrics”. The Shakir reference, the Friedrich reference, and the Lamberti reference are plainly analogous to the claimed subject matter because they are in the same field of endeavor of cellular (4G/5G) performance estimation techniques. The Shakir reference discloses a method, as in independent claim 1. Both the Friedrich reference and the Lamberti reference disclose or at least suggest a known technique that is plainly applicable to the method. It is therefore the case that it would have been obvious to a person having ordinary skill in the art before the effective date filing date of the claimed invention to modify the Shakir reference by applying the known technique, based on the teachings, motivations and/or suggestions of the Friedrich reference and the Lamberti reference, so as to yield predictable results and resulted in an improved device/method that includes the capability of “classifying” of the “failures” comprises “creating scenarios to simulate the failures based on root cause analysis of one or more errors using the emulated network performance metrics”. Claim 9 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the Friedrich reference and the Lamberti reference, for the same reasons as claims 1 and 8, and for the foregoing further reasons. Claim 10 depends from claim 8, and it recites the further limitation of “identifying potential remedial actions with respect to one or more of the failures”. As regards remedial actions, the Lamberti reference discloses (at pages 45-48, sections 16 and 17) the following: Network Metric #16. Network Error Rate Network error rate is a measure of the number of errors that occur in network traffic, expressed as a percentage of the total number of packets transmitted. Errors can occur in various forms, such as packet loss, packet corruption, and other transmission errors, and can impact network performance and reliability. . . . . Monitoring network error rates is important for network administrators to ensure that their networks are performing at optimal levels. By tracking error rates and identifying the causes of errors, network administrators can take steps [(remedial actions)] to optimize network performance and improve reliability. For example, they can identify network congestion, hardware failures, or misconfigurations that may be causing errors, and take appropriate action [(remedial actions)] to address these issues. . . . . Network Metric #17. TCP Retransmission Rate . . . . A high TCP retransmission rate can indicate issues [(that is, failures)] such as network congestion, packet loss, or network errors, which can impact the performance. . . . To improve the TCP retransmission rate, network administrators may need to identify and address [(take remedial actions)] [for] these issues. 
This may involve optimizing network capacity, identifying and replacing faulty network equipment, or adjusting network configuration settings[(—which are remedial actions)].

As regards identifying failures, the Friedrich reference discloses (at pages 1, 3, and 5) the following:

(p.1) . . . . [U]ser equipment (UE)-based active field testing on the live network, often referred to as drive testing. . . . .

(p.3) Verifying the field performance and capacity gain of a massive MIMO implementation is critical to the overall performance of a 5G NR network. Field testing requires multiple test UEs distributed throughout a cell. . . . Accurate testing requires that these test UEs be physically spread across the cell area, rather than all bunched in one area. (It is virtually impossible to isolate users to nonoverlapping beams if the UEs are too tightly packed together). . . . .

(p.5) Testing the latency and peak throughput of the 5G connection is another critical step for identifying points of failure for dropped calls or handover issues. Drive testing at the application layer is crucial for translating the user experience into measurable KPIs and speeding up the verification of use cases. . . . .

Accordingly, the Shakir reference, in view of the Friedrich reference and the Lamberti reference, discloses or at least suggests the further limitation of “identifying potential remedial actions with respect to one or more of the failures”. The Shakir reference, the Friedrich reference, and the Lamberti reference are plainly analogous to the claimed subject matter because they are in the same field of endeavor of cellular (4G/5G) performance estimation techniques. The Shakir reference, in view of the Friedrich reference and the Lamberti reference, discloses a method, as in independent claim 1. Both the Friedrich reference and the Lamberti reference disclose or at least suggest a known technique that is plainly applicable to the method. It is therefore the case that it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the Shakir reference by applying the known technique, based on the teachings, motivations and/or suggestions of the Friedrich reference and the Lamberti reference, so as to yield predictable results and result in an improved method that includes the capability of “identifying potential remedial actions with respect to one or more of the failures”. Claim 10 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the Friedrich reference and the Lamberti reference, for the same reasons as claims 1 and 8, and for the foregoing further reasons.

Claim 20 depends from claim 18, and like claim 8, it recites the further limitation of “classifying failures of the communications network based on the emulated network performance metrics, prior to live traffic being generated by users on the communications network”. Claim 20 is therefore rejected under 35 U.S.C. § 103 as unpatentable over the Shakir reference, in view of the Friedrich reference and the Lamberti reference, for the same reasons as claim 18 and claim 8, and for the foregoing further reasons.

Response to Arguments

Applicant’s arguments with respect to the pending claims have been considered but are moot because of the new grounds of rejection that are attributable to the new amendments. (See M.P.E.P. FP 7.38).
While not repeated here in the Response Section, see the above rejection of claims 1, 11, and 18 for the relevant citations found in Shakir along with their interpretation that discloses the amended limitations.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON C. DEDITCH whose telephone number is (571)272-4780. The examiner can normally be reached Monday through Thursday at 8:00 am to 6:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rafael Perez-Gutierrez can be reached on 571-272-7915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Aaron C. Deditch/
Examiner, Art Unit 2642

/Rafael Pérez-Gutiérrez/
Supervisory Patent Examiner, Art Unit 2642

3/12/2026

Prosecution Timeline

Jun 01, 2023
Application Filed
Sep 13, 2025
Non-Final Rejection — §102, §103, §112
Nov 19, 2025
Interview Requested
Nov 25, 2025
Applicant Interview (Telephonic)
Nov 25, 2025
Examiner Interview Summary
Dec 10, 2025
Response Filed
Mar 09, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581453
INFORMATION TRANSMISSION METHOD AND RELATED DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12538253
POSITIONING METHOD AND APPARATUS INDICATING THAT A TARGET RANDOM ACCESS PROCESS IS A RANDOM ACCESS PROCESS FOR POSITIONING BY USING INFORMATION, TERMINAL AND BASE STATION
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12512903
SYSTEMS AND METHODS FOR SERVICE RESTORATION FOR SATELLITE COMMUNICATIONS RESILIENCE
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 99% (+37.5%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 11 resolved cases by this examiner. Grant probability derived from career allow rate.
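
As a hedged sketch only, the headline projections above can be reproduced from the career statistics with two guesses about the model: that the interview adjustment is additive and that the result is capped at 99%. Neither is documented behavior of this tool.

career_allow_rate = 8 / 11  # from the examiner's 11 resolved cases above
interview_lift = 0.375      # displayed interview lift

grant_probability = round(career_allow_rate, 2)                 # about 0.73
with_interview = min(career_allow_rate + interview_lift, 0.99)  # assumed 99% cap

print(f"Grant probability: {grant_probability:.0%}")  # 73%
print(f"With interview:    {with_interview:.0%}")     # 99%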
