Prosecution Insights
Last updated: April 19, 2026
Application No. 18/397,781

CELLULAR FIELD TESTING AUTOMATION TOOL INCLUDING NETWORK CONDITIONS MONITOR, AUDIBLE AND VISIBLE CONCLUSION ALARMS, AND DATA STALLS

Non-Final OA — §103
Filed: Dec 27, 2023
Examiner: ABDULLAEV, ERKIN SHAVKATOVICH
Art Unit: 2648
Tech Center: 2600 — Communications
Assignee: BOOST SUBSCRIBERCO L.L.C.
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Projected OA Rounds: 1-2
Projected Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (7 granted / 8 resolved) — above average, +25.5% vs TC avg
Interview Lift: +14.3% (moderate) across resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 31 applications currently pending
Career History: 39 total applications across all art units

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§103: 55.8% (+15.8% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)
TC averages are estimates • Based on career data from 8 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant’s election without traverse of Species 1 in the reply filed on 03/11/2026 is acknowledged.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/29/2023, 11/19/2024, 12/24/2025, 01/13/2026, and 02/24/2026 have been considered by the examiner and made of record in the application file.

Specification

The abstract of the disclosure is objected to because the paragraph exceeds the 150-word limit. The examiner suggests removing “(i)”, “(ii)”, “(iii)”, and “(iv)”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-3, 6-9, 21-23, and 26-30 are rejected under 35 U.S.C. 103 as being unpatentable over SCHMIDT (US-20150189525-A1) in view of Linkola (US-20220140919-A1) in view of Liu (US-20210385646-A1) in further view of Bugenhagen (US-20140280904-A1).

Regarding Claim 1, SCHMIDT discloses a method comprising: initiating a cellular field testing tool that tests (paragraph [0023], Fig.1, "The proposed test architecture may involve a testing platform 101, a reference device 103, and device under test (DUT) 105." (i.e., testing platform 101 that tests.)), during a test case, a condition of cellular network connectivity to both a device under test and a reference device (paragraph [0025], Fig.1, "The testing platform 101 provides at least two functions, comprising of: storing the device logs; and hosting the application 107 for remotely controlling the devices 103 and 105 to trigger testing and also to analyze device logs." and paragraph [0038], "The application 107 may create a profile with the following information: network mode (e.g., 4G, 3G, etc.), Internet Protocol (IP) Multimedia Subsystem (IMS) registration Status (e.g., IMS registered, Not Registered, etc.), RSRP, Battery Level, and Idle computer processing unit (CPU) utilization. Once the profile is sent from the reference device 103 to the DUT 105, the application 107 in the DUT 105 compares both of the profiles to check for consistency." and paragraph [0046], Fig.3, "In step 301, the testing platform 101 receives an input for specifying one or more parameters for evaluating a device under testing, wherein the one or more parameters include a signal strength parameter, a duration of testing parameter, a number of cell sites parameter, a test route speed parameter, a location parameter, a testing type parameter, a time parameter, a frequency band parameter, a handoff type parameter, or a combination thereof." (i.e., testing cellular connectivity such as “a test route speed parameter”, “a frequency band parameter”, or “a signal strength parameter”.)); comparing, by the cellular field testing tool, how well the device under test performs with how well the reference device performs (paragraph [0024], Fig.1, "the reference device 103 may be the control group in a scientific experiment analogy. Device performance is a function of device software, hardware and mechanical characteristics. The DUT 105 may match or exceed the reference device 103's performance. The DUT 105's functionality and performance may then be compared with the reference devices 103 and classified accordingly." and paragraph [0025], “The testing platform 101 provides at least two functions, comprising of: storing the device logs; and hosting the application 107 for remotely controlling the devices 103 and 105 to trigger testing and also to analyze device logs … the devices 103 and 105 automatically uploads log files to the testing platform 101 for post processing and analysis.” and paragraph [0063], "The reference devices 103 have undergone thorough tests and has a documented history of consistently above average performance. The device performance measured may test the device's software, hardware, and mechanical characteristics. That is, the reference device 103 generally represents a benchmark for quality and acceptability in the industry. In comparison, the DUT 105 must meet or exceed the reference device 103's performance. The functionality and performance of the DUT 105 are characterized and compared against the reference devices 103." (i.e., comparing the DUT 105 with the reference device 103. The testing platform 101 analyzes the logs created by the devices 103 and 105.)).

However, SCHMIDT does not disclose determining, by the cellular field testing tool, a theoretical throughput level corresponding to an expected value of a network connection between the device under test or the reference device and a cellular base station; and displaying, by the cellular field testing tool, a color-coded graphical representation of a category indicating how well the device under test or the reference device performs in comparison to the theoretical throughput level such that the displaying enables a user to detect whether a problem exists with the network connection to the device under test or the reference device.
Linkola discloses determining, by the cellular field testing tool, a theoretical throughput level corresponding to an expected value of a network connection between the device under test or the reference device and a cellular base station (paragraph [0085], Fig.8, "the data throughput and/or data rate of the client device 420 using field-to-lab testing system 40, 60 during the simulated movement along the path 200 can be compared to the theoretical maximum-achievable data throughput and/or theoretical maximum-achievable data rate, as measured by the wireless link monitor 210 in the field test environment 220, to determine the performance of the device-under-test (e.g., client device 420, root AP 430, etc.)." (i.e., Linkola teaches a theoretical maximum-achievable data throughput to determine the performance of the device-under-test.)); and displaying, by the cellular field testing tool, a color-coded graphical representation of a category indicating how well the device under test or the reference device performs in comparison to the theoretical throughput level such that the displaying enables a user to detect whether a problem exists with the network connection to the device under test or the reference device (paragraph [0085], Fig.8, "For example, referring to FIG. 3, the maximum-achievable data throughput 800 for all mesh network nodes 230-232 as the wireless link monitor 210 is moved along the path 200 in the field test environment 220 is illustrated in FIG. 8. The achievable data throughput 810 available to the client device 420 can then be compared as a performance measure of the device-under-test (e.g., client device 420 and/or mesh network node 430, 440, 650) in system 40, 60." and paragraph [0086], Fig.9, "Performance can also be characterized across test runs along path 200 using field-to-lab testing system 40, 60, which can indicate statistical and/or run-to-run variations. For example, FIG. 9 illustrates a graph 90 of data throughput available to the client device versus time on test runs 901-903. Time period 910 indicates a run-to-run variation that appears to occur when the client device is simulated to appear close to the second extender AP node 232." (i.e., Fig.8 shows the theoretical throughput Fig.8:800 being compared to the DUT throughput Fig.8:810. The comparison is sent to a client device to compare the performance. Fig.9 shows throughput versus time.)).

SCHMIDT and Linkola are considered to be analogous to the claimed invention because they are in the same field of Supervisory, monitoring or testing arrangements. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified SCHMIDT to implement the method of Linkola comparing the theoretical throughput with the device under test, as it enables SCHMIDT to show multiple tests so the user can visually see performance, thereby giving a more accurate result to the user, such as in Figure 9 wherein test runs 901-903 are graphed for user convenience (Linkola, paragraph [0086], Fig.9:901-903, “Performance can also be characterized across test runs along path 200 using field-to-lab testing system 40, 60, which can indicate statistical and/or run-to-run variations. For example, FIG. 9 illustrates a graph 90 of data throughput available to the client device versus time on test runs 901-903. Time period 910 indicates a run-to-run variation that appears to occur when the client device is simulated to appear close to the second extender AP node 232.”).
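The core comparison the examiner draws from Linkola — achieved throughput measured against a theoretical maximum along a test path — can be sketched as follows. This is a minimal illustration only; the function names and the 80% threshold are assumptions for the sketch, not values taken from any cited reference.

```python
# Illustrative sketch (hypothetical names): compare achieved throughput on
# each test run against a theoretical maximum-achievable throughput, and
# flag runs that fall below an assumed acceptability threshold.

def throughput_ratio(achieved_mbps: float, theoretical_mbps: float) -> float:
    """Return achieved throughput as a fraction of the theoretical maximum."""
    if theoretical_mbps <= 0:
        raise ValueError("theoretical throughput must be positive")
    return achieved_mbps / theoretical_mbps

def flag_underperforming_runs(runs: list[float], theoretical_mbps: float,
                              threshold: float = 0.8) -> list[int]:
    """Indices of test runs whose ratio falls below the (assumed) threshold."""
    return [i for i, mbps in enumerate(runs)
            if throughput_ratio(mbps, theoretical_mbps) < threshold]

# Example: three test runs against a 100 Mbps theoretical maximum;
# the second run (index 1) falls below 80% of the maximum.
runs = [92.0, 75.5, 88.1]
print(flag_underperforming_runs(runs, 100.0))  # -> [1]
```

Run-to-run variation of the kind Linkola's Fig. 9 shows would then surface as differing flagged indices across repeated test runs.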
However, SCHMIDT in view of Linkola do not explicitly disclose a cellular base station; displaying, by the cellular field testing tool, a color-coded graphical representation of a category indicating how well the device under test or the reference device performs in comparison to the theoretical throughput level such that the displaying enables a user to detect whether a problem exists with the network connection to the device under test or the reference device.

Liu discloses a cellular base station (paragraph [0029], "aggregated views for the traffic that share the same provisioning characteristics, e.g., QoS Class Identifier (QCI) and Allocation and Retention Priority (ARP), and served by the same network element may also be identified and correlated. Once serving base station 116 is identified, the associated transport network performance data may be identified and correlated. In addition, data associated with Internet 110 or application servers 111 may be identified and correlated for mobile device 102 and further correlated with the data that capture the aggregated view for the traffic served by the same associated network element. After the correlation of the multiple data sources associated with mobile device 102 and its associated network elements, a comprehensive view of the conditions and performances of mobile device 102 may be created and the associated network elements across different segments of the end-to-end service path for mobile device 102." (i.e., discloses monitoring the connection between a mobile device and a base station)); and displaying, by the cellular field testing tool, a color-coded graphical representation of a category indicating how well the device under test or the reference device performs in comparison to the theoretical throughput level such that the displaying enables a user to detect whether a problem exists with the network connection to the device under test or the reference device (paragraph [0031], Fig.2, "At step 133, based on the correlation of step 132, generating an indication of mobility service health. Each key performance metric may be color-coded based on thresholds that are established to reflect the expected service quality. In a simple example, three color codes may be used: “Green”, “Yellow”, and “Red”." and paragraph [0033], "With continued reference to step 133, key performance metrics may generally be pre-defined, e.g., throughput, packet loss error rate, delay, that capture user experience of mobile wireless service applications." and paragraph [0032], “Color-coded CQMs may provide intuitive and simple representation of the overall mobility service health of mobile device 102 and associated network elements (e.g., network elements correlated in step 132). As disclosed herein, a map or other graphic may be generated with the correlated network elements and further include CQMs, color-coded performance metrics, or the like.” and paragraph [0039], “At step 138 of FIG. 2, server 109 may send an indication of the root cause of the service degradation to be displayed or proactively using the indication of the root cause of the service degradation to alter a device of the network to resolve the degradation.” and page 11, Table 1 (i.e., the pre-defined throughput is read as the theoretical throughput.)).
SCHMIDT in view of Linkola and Liu are considered to be analogous to the claimed invention because they are in the same field of Supervisory, monitoring or testing arrangements. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified SCHMIDT to implement the color coding of Liu, as it enables SCHMIDT to provide intuitive and simple representations of the overall mobility service health of the DUT and network elements (Liu, paragraph [0032], “Color-coded CQMs may provide intuitive and simple representation of the overall mobility service health of mobile device 102 and associated network elements (e.g., network elements correlated in step 132). As disclosed herein, a map or other graphic may be generated with the correlated network elements and further include CQMs, color-coded performance metrics, or the like.”).

However, SCHMIDT in view of Linkola in further view of Liu do not explicitly disclose displaying, by the cellular field testing tool, a color-coded graphical representation of a category indicating how well the device under test. Bugenhagen discloses displaying, by the cellular field testing tool, a color-coded graphical representation of a category indicating how well the device under test or the reference device performs in comparison to the theoretical throughput level such that the displaying enables a user to detect whether a problem exists with the network connection to the device under test or the reference device (paragraph [0029], “In another example, the test vector may be sent to the end device 110 with the test vector being looped back by the end device 110 for receipt and analysis by the test device 106. The test vector may be generated by the test device 106 and sent to any of the elements, components, modules, systems, equipment or devices of the communications environment.” and paragraph [0064], Figs 8A-8C, “performance results of the test vector may be used to generate a graphical display performance map. For example, a spider web chart may be utilized with protocol sectors, bandwidth, and color, such as green, yellow, and red indicating a category for the various attributes. This type of chart may provide a simple visual display of key performance parameters. Other graphical display charts may provide optimal reporting results depending upon the test vector.” (i.e., Liu alludes to displaying the throughput with color; the examiner relies on Bugenhagen to explicitly teach that a testing device can have a color-coded graphical representation displaying a category indicating how well the DUT is performing.)).

SCHMIDT in view of Linkola in further view of Liu and Bugenhagen are considered to be analogous to the claimed invention because they are in the same field of Supervisory, monitoring or testing arrangements. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified SCHMIDT to implement the method of Bugenhagen to provide a simple visual display of key performance parameters, and additionally to provide different graphical display charts for optimal reporting depending on the type of tests performed (Bugenhagen, paragraph [0064], “This type of chart may provide a simple visual display of key performance parameters. Other graphical display charts may provide optimal reporting results depending upon the test vector.”).

Regarding Claim 2, SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen discloses all the limitations of claim 1.
SCHMIDT further discloses wherein the method further comprises outputting, by the cellular field testing tool, an indication of how the device under test performs in comparison to how well the reference device performs (paragraph [0060], "The application 107 may upload this log data to the testing platform 101 upon completion of the test route or field test." and paragraph [0063], "The device performance measured may test the device's software, hardware, and mechanical characteristics. That is, the reference device 103 generally represents a benchmark for quality and acceptability in the industry. In comparison, the DUT 105 must meet or exceed the reference device 103's performance. The functionality and performance of the DUT 105 are characterized and compared against the reference devices 103. The testing platform 101 may store the device logs and hosts the requisite tools to synchronize testing and analyze the device logs." and paragraph [0065], "the testing platform 101 may push a test analyzer (not shown) to the reference device 103 via wireless connection 903. Once the testing is completed, the DUT 105 may perform the post-processing analysis before pushing the device logs to the reference device 103. A summary of the results may be displayed on the reference device 103." and paragraph [0070], Fig.13A-13B, "The test log 1311a displays the latest additions to the test log as it is updated with test results." (i.e., the examiner points to Figs. 13A-13B, which show the “Test Log” outputting the result.)).

Regarding Claim 3, SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen discloses all the limitations of claim 1. Linkola further discloses wherein the cellular field testing tool compares how well the device under test performs with the theoretical throughput level (paragraph [0085], Fig.8, "For example, referring to FIG. 3, the maximum-achievable data throughput 800 for all mesh network nodes 230-232 as the wireless link monitor 210 is moved along the path 200 in the field test environment 220 is illustrated in FIG. 8. The achievable data throughput 810 available to the client device 420 can then be compared as a performance measure of the device-under-test (e.g., client device 420 and/or mesh network node 430, 440, 650) in system 40, 60." and paragraph [0086], "Performance can also be characterized across test runs along path 200 using field-to-lab testing system 40, 60, which can indicate statistical and/or run-to-run variations. For example, FIG. 9 illustrates a graph 90 of data throughput available to the client device versus time on test runs 901-903. Time period 910 indicates a run-to-run variation that appears to occur when the client device is simulated to appear close to the second extender AP node 232."). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference.

Regarding Claim 6, SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen discloses all the limitations of claim 1. Liu further discloses wherein the color-coded graphical representation of the category associates a first color (par.31, “Green”) with a higher degree of how well the device under test performs with the theoretical throughput level than a distinct color-coded graphical representation of a distinct category associated with a second color (par.31, “Yellow”) (paragraph [0031], "At step 133, based on the correlation of step 132, generating an indication of mobility service health. Each key performance metric may be color-coded based on thresholds that are established to reflect the expected service quality. In a simple example, three color codes may be used: “Green”, “Yellow”, and “Red”.
“Green” indicates no expected performance issue for any application for this mobile for this measurement interval. “Yellow” indicates some performance issues for some applications during this measurement interval. “Red” indicates potential performance issues for all applications during this measurement interval …" and paragraph [0033], Fig.2:133, "With continued reference to step 133, key performance metrics may generally be pre-defined, e.g., throughput, packet loss error rate, delay, that capture user experience of mobile wireless service applications. Some key performance metrics may be application transport layer protocol specific, e.g., for TCP applications, throughput, retransmission rate, round-trip delay, for RTP applications, packet loss, jitter, etc." (i.e., the examiner reads the pre-defined throughput as the theoretical throughput; the throughput is tested in order to obtain the color indicating a category such as Green, Yellow, or Red.)). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference.

Regarding Claim 7, SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen discloses all the limitations of claim 6. Liu further discloses wherein the first color corresponds to green (paragraph [0031], "…In a simple example, three color codes may be used: “Green”, “Yellow”, and “Red”. “Green” indicates no expected performance issue for any application for this mobile for this measurement interval…"). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference.

Regarding Claim 8, SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen discloses all the limitations of claim 1.
Liu further discloses wherein: the category belongs to a plurality of at least three categories (paragraph [0031], Fig.2, "At step 133, based on the correlation of step 132, generating an indication of mobility service health. Each key performance metric may be color-coded based on thresholds that are established to reflect the expected service quality. In a simple example, three color codes may be used: “Green”, “Yellow”, and “Red”."); and the cellular field testing tool associates the at least three categories with colors of green, yellow, and red, respectively (paragraph [0031], Fig.2, "“Green” indicates no expected performance issue for any application for this mobile for this measurement interval. “Yellow” indicates some performance issues for some applications during this measurement interval. “Red” indicates potential performance issues for all applications during this measurement interval." (i.e., Red, Yellow, and Green are used to reflect performance categories such as throughput.)). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference.

Regarding Claim 9, SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen discloses all the limitations of claim 1. Liu further discloses wherein the cellular field testing tool outputs an indication that an apparent deficiency in performance by the device under test can be due to the problem existing with the network connection rather than due to the device under test itself (paragraph [0038], Fig.2, "At step 137 of FIG. 2, based on the predicted degradation of the mobility service health, determining a root cause of the predicted degradation for mobile device 102. The root cause may be determined by identifying instances of “Red” CQMs.
Then, given the identified mobile devices with poor service quality, further review CQMs of the associated network elements across different network segments and service health areas (e.g., RAN CQMs for congestion and performance of the serving cell, transport network CQM, etc.) to identify the network segment(s) and key performance metric(s) that may be seen to cause the predicted poor service quality." (i.e., determining the “root cause” of the degradation for mobile devices, such as network elements and service health.)). The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference.

Regarding Claim 21, which is similar in scope to claim 1 and is thus rejected under the same rationale: SCHMIDT additionally discloses a non-transitory memory (paragraph [0077], “The term "computer-readable medium" as used herein refers to any medium that participates in providing instructions to the processor 1403 for execution. Such a medium may take many forms, including but not limited to non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as the storage device 1409.”).

Claims 22, 23, 26, 27, 28, and 29 are similar in scope to claims 2, 3, 6, 7, 8, and 9, respectively, and are thus rejected under the same rationale. Regarding Claim 30, which is similar in scope to claim 1 and is thus rejected under the same rationale:
SCHMIDT discloses a system (Fig.1, paragraphs [0022]-[0025]).

Claims 4-5 and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over SCHMIDT (US-20150189525-A1) in view of Linkola (US-20220140919-A1) in view of Liu (US-20210385646-A1) in view of Bugenhagen (US-20140280904-A1) in further view of Hu (US-20140105058-A1).

Regarding Claim 4, SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen discloses all the limitations of claim 2. However, SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen do not explicitly disclose wherein the cellular field testing tool compares how well the device under test performs with the theoretical throughput level by dividing a numerical measurement of how well the device under test performs by the theoretical throughput level as a percentage. Hu discloses wherein the cellular field testing tool compares how well the device under test performs with the theoretical throughput level by dividing a numerical measurement of how well the device under test performs by the theoretical throughput level as a percentage (paragraph [0019], "To estimate wireless link quality, the network device first samples packets' transmission information at each of a plurality of transmission rates. Next, the network device calculates packet delivery ratio with respect to each transmission rate, and estimates average throughput at least based on transmission rates and related packet delivery ratio in current environment. Then, the network device calculates wireless link quality percentage as an α/β ratio, where α indicates the estimated average throughput in current environment, and β indicates maximum valid throughput in the ideal environment." (i.e., dividing α by β to get a percentage.)).
SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen and Hu are considered to be analogous to the claimed invention because they are in the same field of Supervisory, monitoring or testing arrangements. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified SCHMIDT to implement the method of Hu because such a modification would have been obvious to try. There is a finite number of ways to compare theoretical throughput versus real throughput for the user, and dividing one value by the other permits a faster comparison between real and hypothetical throughput, as the comparison is easily understandable when the data is expressed as a percentage, with 100% being ideal and 0% being not ideal (Hu, paragraph [0016], “Embodiments of the present disclosure relate to wireless link quality measurement in wireless networks. In particular, the present disclosure relates to improving estimation of link quality for a wireless link between two wireless nodes when the wireless link environment changes in a wireless network.”).

Regarding Claim 5, SCHMIDT in view of Linkola in view of Liu in view of Bugenhagen in further view of Hu discloses all the limitations of claim 4. Liu further discloses the color-coded graphical representation corresponds to the category as one category within a plurality of categories (paragraph [0031], Fig.2, "At step 133, based on the correlation of step 132, generating an indication of mobility service health. Each key performance metric may be color-coded based on thresholds that are established to reflect the expected service quality.
In a simple example, three color codes may be used: “Green”, “Yellow”, and “Red”."); and each one of the plurality of categories respectively corresponds to a different subrange within a plurality of subranges along which the percentage falls (paragraph [0031], Fig. 2, "“Green” indicates no expected performance issue for any application for this mobile for this measurement interval. “Yellow” indicates some performance issues for some applications during this measurement interval. “Red” indicates potential performance issues for all applications during this measurement interval. Consider DL packet loss error rate over the air. As an example, a threshold for “Green”: <0.1%; thresholds for “Yellow”: >=0.1% and <1%; and threshold for “Red”: >=1%." and paragraph [0033], "With continued reference to step 133, key performance metrics may generally be pre-defined, e.g., throughput, packet loss error rate, delay, that capture user experience of mobile wireless service applications." (i.e., Liu discloses a percentage for packet loss error over the air, but as shown in paragraph [0033] there is also a threshold for throughput, and a throughput expressed as a percentage would likewise be categorized as green, yellow, or red, as disclosed in paragraphs [0031] and [0033].)).

The proposed combination, as well as the motivations for combining the references presented in the rejection of the parent claim, applies to this claim and is incorporated herein by reference.

Regarding Claim 24, which is similar in scope to claim 4, it is rejected under the same rationale. Regarding Claim 25, which is similar in scope to claim 5, it is rejected under the same rationale.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over SCHMIDT (US-20150189525-A1) in view of Linkola (US-20220140919-A1) in view of Liu (US-20210385646-A1) in view of Bugenhagen (US-20140280904-A1) in further view of Zheng (US-20230380012-A1).
Regarding Claim 10, SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen discloses all the limitations of claim 1. However, SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen does not disclose wherein the cellular field testing tool operates on a laptop.

Zheng discloses wherein the cellular field testing tool operates on a laptop (paragraph [0018], "the testing platform 102 is located on and/or executed by a computing device (e.g., a server device, a personal computer, a laptop, a tablet computing device, a mobile phone, etc.)." and paragraph [0019], "the testing platform 102 is configured to communicate with user devices 104 via one or more communication networks (e.g., intranets, other private networks, the Internet, and/or other public networks). The user devices 104 may include computing devices, such as laptop computers, personal computers, tablets, mobile phones," and paragraph [0020], "The user devices 104 are further configured to receive or otherwise obtain test instructions 126 from the testing platform 102, execute tests associated with the test instructions 126 using a test execution engine 128," (i.e., the testing platform can be on a laptop performing tests on user devices.)).

SCHMIDT in view of Linkola in view of Liu in further view of Bugenhagen and Zheng are considered to be analogous to the claimed invention because they are in the same field of supervisory, monitoring, or testing arrangements. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified SCHMIDT to implement the method of Zheng, as SCHMIDT describes a drive test visiting a number of cell sites (SCHMIDT, paragraph [0021]), and having the cellular testing tool operate on a laptop provides a convenient way to travel to all the test sites to perform a comparison between the DUT and the reference device.
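The subrange-to-category mapping the rejection reads onto claim 5 can be sketched directly from the example thresholds Liu gives in paragraph [0031] for DL packet loss error rate. The function name below is an illustrative assumption; the thresholds and category meanings are taken from the cited paragraph.

```python
# Sketch of Liu's color-coded thresholds (paragraph [0031]): each
# subrange of the percentage maps to exactly one category.
def loss_rate_color(loss_pct: float) -> str:
    """Map a DL packet-loss percentage to Liu's example color categories."""
    if loss_pct < 0.1:
        return "Green"   # no expected performance issue for any application
    if loss_pct < 1.0:
        return "Yellow"  # some performance issues for some applications
    return "Red"         # potential performance issues for all applications

print(loss_rate_color(0.05))  # Green
print(loss_rate_color(0.5))   # Yellow
print(loss_rate_color(2.0))   # Red
```

The same pattern would apply to any pre-defined key performance metric expressed as a percentage (e.g., the throughput comparison of claim 4), which is how the rejection ties Liu's categories to the claimed plurality of subranges.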
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Erkin S. Abdullaev, whose telephone number is (571) 272-4135. The examiner can normally be reached Monday-Friday, 8:00 am-5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wesley Kim, can be reached at (571) 272-7867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERKIN ABDULLAEV/
Examiner, Art Unit 2648

/WESLEY L KIM/
Supervisory Patent Examiner, Art Unit 2648

Prosecution Timeline

Dec 27, 2023
Application Filed
Mar 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578413
METHOD FOR POSITIONING USING WIRELESS COMMUNICATION AND ELECTRONIC DEVICE FOR SUPPORTING SAME
2y 5m to grant Granted Mar 17, 2026
Patent 12538116
CELLULAR SERVICE ACTIVATION AND DEACTIVATION ON MOBILE DEVICES
2y 5m to grant Granted Jan 27, 2026
Patent 12498448
ANTI-HOPPING ALGORITHM FOR INDOOR LOCALIZATION SYSTEMS
2y 5m to grant Granted Dec 16, 2025
Patent 12484007
METHOD AND APPARATUS FOR PROCESSING EVENT FOR DEVICE CHANGE
2y 5m to grant Granted Nov 25, 2025
Patent 12445554
METHOD AND DEVICE FOR MANAGING MULTIPLE WIRELESS CONNECTIONS SHARING A LIMITED TRUNK GROUP
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
88%
Grant Probability
99%
With Interview (+14.3%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
