Prosecution Insights
Last updated: April 19, 2026
Application No. 17/616,021

DYNAMIC GENERATION ON ENTERPRISE ARCHITECTURES USING CAPACITY-BASED PROVISIONS

Non-Final OA: §101, §103
Filed
Dec 02, 2021
Examiner
PADOT, TIMOTHY
Art Unit
3625
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Extreme Networks Inc.
OA Round
7 (Non-Final)
Grant Probability: 39% (At Risk)
OA Rounds: 7-8
To Grant: 3y 9m
With Interview: 67%

Examiner Intelligence

Career Allow Rate: 39% (221 granted / 562 resolved; -12.7% vs TC avg)
Interview Lift: +28.1% for resolved cases with interview vs. without
Avg Prosecution: 3y 9m typical timeline; 39 applications currently pending
Career History: 601 total applications across all art units

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 8.6% (-31.4% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Tech Center averages are estimates; based on career data from 562 resolved cases.
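The headline figures above follow directly from the raw counts; a minimal sketch (assuming the dashboard's simple definitions, i.e., allow rate = granted / resolved, and a Tech Center baseline inferred from the displayed delta) reproduces the arithmetic:

```python
# Reproduce the dashboard's headline arithmetic from the raw counts above.
granted, resolved = 221, 562

# Career allow rate: granted as a share of resolved cases.
allow_rate = round(100 * granted / resolved, 1)  # 39.3, displayed as "39%"

# The -12.7% delta implies a Tech Center average near 52.0% (assumed here).
tc_average = 52.0
delta_vs_tc = round(allow_rate - tc_average, 1)  # -12.7
```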

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of Claims

This Non-Final Office Action is in response to Applicant's Request for Continued Examination (RCE) filed 01/14/2026. In accordance with Applicant's amendment, claims 1, 8, and 15 are amended. Claims 1-2, 4-9, 11-16, and 18-21 are currently pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submissions filed on 12/10/2025 have been entered.

Response to Amendment

The amendments to claims 1, 8, and 15 have been considered and addressed in the updated rejections set forth below under 35 U.S.C. §101 and §103.

Response to Arguments

Response to §101 Arguments: Applicant's arguments (Remarks at pgs. 10-11) with respect to the §101 rejection of claims 1-2, 4-9, 11-16, and 18-21 have been considered, but are not persuasive. Applicant argues that "the claims do not recite an abstract idea" and that "the claims are directed to solving the problem involving dynamically generating enterprise architectures to keep up with improvements in architecture designs and advancements in technology" (Remarks at pg. 10). The Examiner respectfully disagrees.
In response to applicant's argument that the claims do not recite an abstract idea, applicant's attention is directed to the Step 2A Prong One analysis of the §101 rejection below, which provides step-by-step analysis explaining why each step, but for the generic computing elements and high-level machine learning model/algorithm, could be implemented mentally (i.e., under the "mental processes" abstract idea grouping). Applicant has not effectively rebutted these findings, such as by explaining why any of the steps could not be performed mentally via human observation, evaluation, opinion, or judgment. Moreover, the Examiner emphasizes that no improvement has been shown to "architecture designs and advancements in technology" as alleged by applicant.

With respect to the new step of "changing, by the first enterprise network, the enterprise architecture based on the one or more recommendations as needed based on network demands," as recited by amended claims 1/8/15, it is first noted that "changing…the enterprise architecture" is tentative insofar as it need only be performed "as needed based on network demands"; however, the claim does not require such a need (or network demands) so as to actually invoke the change (i.e., the step is optional). Furthermore, there is no detail concerning "how" the change to the architecture is achieved or "what" (or "whom") performs the change, such that the "changing" is arguably disembodied. Nevertheless, although the step is optional under a broadest reasonable interpretation, it is evaluated under both Step 2A Prong Two and Step 2B in the updated §101 rejection since it could be interpreted as going beyond the "mental processes" abstract idea grouping.

In response to applicant's suggestions that "the processes described herein are computationally complex and cannot be reasonably performed by a human at scale" and that such analysis involves "thousands of enterprise networks on a continuous basis" (Remarks at pg.
11), the Examiner emphasizes that the claims neither recite nor inherently require a level of complexity precluding a human from practically performing the claim steps falling under the mental processes abstract idea grouping, nor are there any claim limitations that recite or inherently require analysis of "thousands of enterprise networks on a continuous basis" as alleged by applicant. Therefore, Applicant's argument is unpersuasive because it relies on applying a much narrower interpretation than the claim language requires by seeking to import limitations from the specification, which is impermissible. See Superguide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870, 875, 69 USPQ2d 1865, 1868 (Fed. Cir. 2004). See also CollegeNet, Inc. v. Apply Yourself Inc., 418 F.3d 1225, 1231 (Fed. Cir. 2005) (while the specification can be examined for proper context of a claim term, limitations from the specification will not be imported into the claims).

In response to applicant's argument relying on the alleged "sufficient specificity in how the result is achieved" (Remarks at pg. 11) and "the claimed way that a dynamic recommendation is generated" under Step 2A Prong Two and Step 2B, these arguments lack merit because neither the specificity of a claim nor the details falling under the scope of the abstract idea itself (e.g., "generating…one or more recommendations") are dispositive on subject matter eligibility under §101. We may assume that the techniques claimed are "[g]roundbreaking, innovative, or even brilliant," but that is not enough for eligibility. Ass'n for Molecular Pathology v. Myriad Genetics, Inc., 569 U.S. 576, 591 (2013); accord buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1352 (Fed. Cir. 2014). Nor is it enough for subject-matter eligibility that claimed techniques be novel and nonobvious in light of prior art, passing muster under 35 U.S.C. §§ 102 and 103. See Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 89-90 (2012); Synopsys, Inc. v.
Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016) ("[A] claim for a new abstract idea is still an abstract idea. The search for a § 101 inventive concept is thus distinct from demonstrating §102 novelty."); Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1315 (Fed. Cir. 2016) (same for obviousness) (Symantec). Accordingly, applicant's argument relying on the alleged specificity of how the result is achieved and/or limitations directed to the abstract idea itself is insufficient to render the claims eligible. For the reasons provided above, along with the reasons set forth in the updated §101 rejection below, the amendments and arguments are not sufficient to overcome the §101 rejection.

Response to §103 Arguments: Applicant's arguments with respect to the §103 rejection of claims 1-2, 4-9, 11-16, and 18-21 (Remarks at pgs. 12-13) have been considered, but are primarily raised in support of the amendments to independent claims 1/8/15, which are believed to be fully addressed via the updated §103 rejection below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-9, 11-16, and 18-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The eligibility analysis in support of these findings is provided below, in accordance with the subject matter eligibility guidance set forth in MPEP 2106.

With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106.03), it is first noted that the claimed method (claims 1-2, 4-7, and 21), device (claims 8-9 and 11-14), and non-transitory, tangible computer-readable device (claims 15-16 and 18-20) are each directed to at least one potentially eligible category of subject matter (i.e., process, machine, and article of manufacture). Accordingly, claims 1-2, 4-9, 11-16, and 18-21 satisfy Step 1 of the eligibility inquiry.

With respect to Step 2A Prong One of the eligibility inquiry (as explained in MPEP 2106.04), it is next noted that the claims recite an abstract idea that falls under the "Mental Processes" abstract idea grouping within the enumerated groupings of abstract ideas (as set forth in MPEP 2106.04(a)(2)) since the claims describe activities that could be performed in the human mind (including an observation, evaluation, judgment, opinion). With respect to independent claim 1, the limitations reciting the abstract idea are indicated in bold below, whereas the additional elements are identified in plain text and are separately evaluated under Step 2A Prong Two and Step 2B below:

receiving, by a server device with a machine learning model, historical information from a plurality of enterprise networks, the historical information comprising information about an enterprise architecture of each of the plurality of enterprise networks (The "receiving" step describes activity that, but for the server device, could be performed in the human mind, such as by human observation, evaluation, or judgment.
In addition, the "receiving" step falls under insignificant extra-solution data gathering activity, which is not enough to amount to a practical application under Step 2A Prong Two (MPEP 2106.05(g)), and under Step 2B it is noted that such extra-solution activity has also been recognized as well-understood, routine, and conventional, and thus insufficient to add significantly more to the abstract idea. See MPEP 2106.05(d) - Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network));

analyzing, by the server device, the historical information from the plurality of enterprise networks to generate a network health score for each of the plurality of enterprise networks (The "analyzing" step describes activity that, but for the server device, could be performed in the human mind, such as by human evaluation, judgment, or opinion);

training, by the server device, the machine learning model using a plurality of machine learning algorithms based on the historical information and the network health score of each the plurality of enterprise networks, wherein the machine learning model categorizes each of the plurality of enterprise networks by clustering the plurality of networks based on a number of client devices per access point using a density based clustering technique and determines a correlation between a category of each of the plurality of enterprise networks and their respective enterprise architecture using an
association algorithm (The "training" and "determines" steps cover activity that, but for the server device, could be performed in the human mind, such as by human evaluation, judgment, or opinion. It is further noted that, although the density based clustering technique is recited at a high level of generality and could be performed mentally or with the aid of pen/paper, this technique also describes activity falling under the "mathematical concepts" abstract idea grouping; however, "Adding one abstract idea (math) to another abstract idea…does not render the claim non-abstract." See RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1326-27, 122 USPQ2d 1377, 1379-80 (Fed. Cir. 2017) (claim reciting multiple abstract ideas, i.e., the manipulation of information through a series of mental steps and a mathematical calculation, was held directed to an abstract idea and thus subjected to further analysis in part two of the Alice/Mayo test));

generating, by the server device, using the machine learning model, an enterprise architecture for a first enterprise network based on a category of the first enterprise network and its respective enterprise architecture, the first enterprise network being a new enterprise network or an existing enterprise network from among the plurality of enterprise networks (The "generating" step covers activity that, but for the server device, could be performed in the human mind, such as by human evaluation, judgment, or opinion, or with the aid of pen and paper (e.g., description/drawing of a planned/simulated architecture on paper).
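The claim recites the density-based clustering of networks by client devices per access point only at a high level of generality. Purely for illustration, a minimal one-dimensional mean shift (a density-based technique, as the Examiner notes in the §103 mapping) might look like the sketch below; the data values and bandwidth are hypothetical.

```python
def mean_shift_1d(points, bandwidth, iters=50):
    """Flat-kernel mean shift on 1-D data: each point climbs to its nearest
    density mode; points that converge to the same mode form one cluster."""
    modes = []
    for p in points:
        x = float(p)
        for _ in range(iters):
            # Mean of all points within `bandwidth` of the current position.
            neighbors = [q for q in points if abs(q - x) <= bandwidth]
            shifted = sum(neighbors) / len(neighbors)
            if abs(shifted - x) < 1e-6:  # converged to a density mode
                break
            x = shifted
        modes.append(round(x, 3))
    return modes

# Hypothetical per-network feature: client devices per access point.
clients_per_ap = [3, 4, 5, 20, 21, 22, 50]
modes = mean_shift_1d(clients_per_ap, bandwidth=5)
clusters = sorted(set(modes))  # one entry per density mode
```

Each resulting mode corresponds to one network category (e.g., low-, medium-, and high-density deployments), which is the kind of categorization the claim language describes.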
In addition, the "generating" step covers activity such as displaying output, such as a picture or description of the enterprise architecture, which therefore also falls under insignificant extra-solution activity, which is not enough to amount to a practical application under Step 2A Prong Two (MPEP 2106.05(g)), and under Step 2B it is noted that such extra-solution activity has also been recognized as well-understood, routine, and conventional, and thus insufficient to add significantly more to the abstract idea. See MPEP 2106.05(d) - Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir.
2014) (computer receives and sends information over a network)); and

generating, by the server device, using the machine learning model, one or more recommendations for updating the enterprise architecture for the first enterprise network based on a change in the network health score (The "generating…one or more recommendations" step, when interpreted in light of the Specification, e.g., paragraphs 50-51, describes updating a recommendation and an intended result, which covers activity that could be performed in the human mind, such as by human evaluation, judgment, or opinion, such as with the aid of pen and paper to generate the updated recommendation related to the enterprise architecture);

changing, by the first enterprise network, the enterprise architecture based on the one or more recommendations as needed based on network demands (This limitation, although tentative/optional and lacking details as to how the change is accomplished, is nevertheless evaluated below as an additional element since it could be interpreted as going beyond the "mental processes" abstract idea grouping).

Independent claims 8 and 15 are directed to a device and non-transitory tangible computer-readable device reciting substantially similar limitations as claim 1, and have therefore been determined to recite the same abstract idea as claim 1.

With respect to Step 2A Prong Two of the eligibility inquiry (as explained in MPEP 2106.04(d)), the judicial exception is not integrated into a practical application. Independent claims 1, 8, and 15 recite additional elements directed to: server device, machine learning model/algorithm, device, memory, processor, non-transitory, tangible computer-readable device, and changing, by the first enterprise network, the enterprise architecture based on the one or more recommendations as needed based on network demands. These additional elements have been evaluated, but fail to integrate the abstract idea into a practical application.
The computing elements (server device, device, memory, processor, non-transitory, tangible computer-readable device) amount to using generic computing elements or instructions (software) to perform the abstract idea, similar to adding the words "apply it" (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment (generic computing environment). See MPEP 2106.05(f) and 2106.05(h).

The machine learning model/algorithm is recited at a high level of generality and has not been shown to integrate the claim into a practical application. Furthermore, the "receiving" step, even if evaluated as an additional element, at most amounts to insignificant extra-solution activity, which is not indicative of a practical application, as noted in MPEP 2106.05(g).

Lastly, the step for changing, by the first enterprise network, the enterprise architecture based on the one or more recommendations as needed based on network demands has been considered as an additional element; however, the "changing" step fails to provide a practical application for the following reasons: First, the performance of the "changing…the enterprise architecture" is tentative insofar as it need only be performed "as needed based on network demands," yet the claim does not require such a need (or network demands) so as to actually invoke the change (i.e., the step is optional). Second, there is no detail concerning "how" the change to the architecture is achieved or "what" (or "whom") performs the change, such that the "changing" is arguably disembodied. Third, this step may be considered as nothing more than generally linking the use of a judicial exception to a particular technological environment, given its high level of generality and the absence of any specificity as to how the enterprise architecture is changed. MPEP 2106.05(h).
In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply/use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.

Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception.

With respect to Step 2B of the eligibility inquiry (as explained in MPEP 2106.05), it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Independent claims 1, 8, and 15 recite additional elements directed to: server device, machine learning model/algorithm, device, memory, processor, non-transitory, tangible computer-readable device, and changing, by the first enterprise network, the enterprise architecture based on the one or more recommendations as needed based on network demands. These additional elements have been evaluated, but fail to add significantly more to the claims.
The computing elements (server device, device, memory, processor, non-transitory, tangible computer-readable device) amount to using generic computing elements (computer hardware, interface) or instructions/software (engine) to perform the abstract idea, similar to adding the words "apply it" (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment (generic computing environment) and does not amount to significantly more than the abstract idea itself.

Notably, Applicant's Specification (see, e.g., pars. 30 and 101 of Spec.) suggests that virtually any computing device(s) under the sun may be used to implement the invention, including generic computers. See, for example, paragraph [0101] of the Specification, which describes that "Computer system 1200 may also be any of a personal digital assistant (PDA), desktop workstation, laptop or notebook computer, netbook, tablet, smart phone, smart watch or other wearable, appliance, part of the Internet-of-Things, and/or embedded system, to name a few non-limiting examples, or any combination thereof." Therefore, the generic computing elements or computer-executable instructions (software) merely serve to tie the abstract idea to a particular operating environment, which does not add significantly more to the abstract idea. See, e.g., Alice Corp., 134 S. Ct. 2347, 110 USPQ2d 1976; Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).

Even if evaluated as an additional element, the "receiving" step describes insignificant extra-solution activity, which has been recognized as well-understood, routine, and conventional, and thus insufficient to add significantly more to the abstract idea.
See MPEP 2106.05(d) - Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).

Next, the machine learning model and algorithms are considered well-understood, routine, and conventional in the art, and therefore do not add significantly more to the claims. See, e.g., You et al., US 2012/0191531 (par. 37: "model 514 may comprise, for example, a model obtained using any of a variety of well-known machine learning techniques"). See also Chickering et al., US Pat. No. 6,831,663 (col. 9, lines 53-58: "obtaining a probabilistic model 300, such as by learning or creating one using conventional machine learning techniques").

Lastly, the step for changing, by the first enterprise network, the enterprise architecture based on the one or more recommendations as needed based on network demands has been considered as an additional element; however, the "changing" step fails to provide significantly more for the following reasons: First, the performance of the "changing…the enterprise architecture" is tentative insofar as it need only be performed "as needed based on network demands," yet the claim does not require such a need (or network demands) so as to actually invoke the change (i.e., the step is optional). Second, there is no detail concerning "how" the change to the architecture is achieved or "what" (or "whom") performs the change, such that the "changing" is arguably disembodied.
Third, the "changing" step amounts to mere instructions to apply a judicial exception by describing nothing more than an idea of a solution or outcome, given the lack of meaningful detail as to how such a change is made. See, e.g., Affinity Labs of Texas v. DirecTV, LLC, 838 F.3d 1253, 1262-63, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (wireless delivery of out-of-region broadcasting content to a cellular telephone via a network without any details of how the delivery is accomplished). See also Intellectual Ventures v. Erie Indem. Co., 850 F.3d 1315, 1331, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (remotely accessing user-specific information through a mobile interface and pointers to retrieve the information without any description of how the mobile interface and pointers accomplish the result of retrieving previously inaccessible information). See also MPEP 2106.05(f).

Furthermore, "changing, by the first enterprise network, the enterprise architecture …" is considered well-understood, routine, and conventional activity in the art. See, e.g., Gupta et al. (US 2012/0072893), noting for example at par. [0003] that "software on network elements may be upgraded or changed. Various approaches are known in the arts for upgrading the software on the network elements." For the reasons above, the "changing" step fails to add significantly more to the claims.

In addition, when taken as an ordered combination, the claim elements add nothing that is not already present when the elements are taken individually. Their collective functions merely provide generic computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application of the abstract idea or that, as an ordered combination, amount to significantly more than the abstract idea itself.
Dependent claims 2, 4-7, 9, 11-14, 16, and 18-21 recite the same abstract idea as the independent claims along with further steps/details that could also be performed in the human mind (including an observation, evaluation, judgment, opinion), therefore falling under the "Mental Processes" grouping, along with the same or substantially the same additional elements (generic computing elements and machine learning model/algorithms) as addressed above in the analysis of claims 1, 8, and 15. The same facts, evidence, and conclusions set forth above regarding the generic computing elements and machine learning model/algorithms are relied on for the Step 2A Prong Two and Step 2B analysis of these additional elements as recited in the dependent claims.

The language in claims 2/9/16 reciting "wherein the plurality of machine learning algorithms comprise a statistical inference algorithm and an association algorithm" has been evaluated, and the same rationale applied to the "machine learning" above is applicable to the machine learning recited in these claims. Furthermore, the statistical inference algorithm and association algorithm, when afforded a broadest reasonable interpretation, cover activity that could be performed in the human mind, such as by human evaluation, judgment, or opinion. Moreover, these algorithms are recited at a high level of generality and could be implemented via mathematical algorithms or equations; however, "Adding one abstract idea (math) to another abstract idea…does not render the claim non-abstract." See RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1326-27, 122 USPQ2d 1377, 1379-80 (Fed. Cir. 2017) (claim reciting multiple abstract ideas, i.e., the manipulation of information through a series of mental steps and a mathematical calculation, was held directed to an abstract idea and thus subjected to further analysis in part two of the Alice/Mayo test).
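For context on the level of generality at issue, an "association algorithm" as recited can be as simple as computing the confidence of an association rule over observed (category, architecture) pairs. The sketch below is purely illustrative; the category and architecture labels are hypothetical, not from the application.

```python
def rule_confidence(records, category, architecture):
    # Confidence of the association rule {category} -> {architecture}:
    # the fraction of networks in `category` observed with that architecture.
    in_category = [a for c, a in records if c == category]
    if not in_category:
        return 0.0
    return in_category.count(architecture) / len(in_category)

# Hypothetical (network category, enterprise architecture) observations.
records = [
    ("high-density", "mesh"),
    ("high-density", "mesh"),
    ("high-density", "hub-and-spoke"),
    ("low-density", "hub-and-spoke"),
]
conf = rule_confidence(records, "high-density", "mesh")  # 2 of 3 -> ~0.667
```

A computation of this kind, being simple counting and division, could indeed be performed with pen and paper, which is the point the Examiner's mental-processes analysis makes.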
The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide generic computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to a practical application or significantly more than the abstract idea itself.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6-9, 13-16, and 19-20 are rejected under 35 U.S.C. §103 as being unpatentable over Mermoud et al. (US 2019/0028909, hereinafter "Mermoud") in view of Chen (US Patent No. 10,536,389) and further in view of Kaplunov et al. (US 2019/0036789, hereinafter "Kaplunov").

Claims 1/8/15: As per claim 1, Mermoud teaches a method, comprising: receiving, by a server device with a machine learning model (paragraphs 12 and 24-25: environment 150 that includes servers; received network metrics as input to a machine learning-based predictive scoring model), historical information from a plurality of enterprise networks, the historical information comprising information about an enterprise architecture of each of the plurality of enterprise networks (paragraph 46 and Figs.
1, 3, and 5-6: During operation, network data collection platform 304 may receive a variety of data feeds that convey collected data 334 from the devices of branch office 306 and campus 308, as well as from network services and network control plane functions 310; See also, paragraph 70: GUI may further include a time selector 510 that allows the administrator to visualize the prior, present, and/or predicted future health status of the network; See also, paragraphs 25 and 27: network 100 may include any number of local networks, data centers, cloud environments, devices/nodes, servers, etc.; network 100 may include one or more mesh networks);

analyzing, by the server device, the historical information from the plurality of enterprise networks to generate a network health score for each of the plurality of enterprise networks (paragraphs 25, 49, 60, 68, and 70: Machine learning-based analyzer 312 may include any number of machine learning models to perform the techniques herein, such as for cognitive analytics, predictive analysis, and/or trending analytics; The techniques herein introduce an adaptive health status scoring approach to network assurance that does not rely on predefined and static rules. In some aspects, the techniques herein may leverage machine learning, such as a regression model, to apply learned and adaptive health status thresholds to the monitored network as part of a predictive scoring model; the health status score may indicate a network throughput health status, a roaming health status, interference health status, or the like);

training, by the server device, a machine learning model using a plurality of machine learning algorithms based on the historical information and the network health score of each the plurality of enterprise networks (paragraphs 25 and 54: e.g., Accordingly, analyzer 312 may rely on a set of machine learning processes that work in conjunction with one another and, when assembled, operate as a multi-layered kernel.
This allows network assurance system 300 to operate in real-time and constantly learn and adapt to new network conditions and traffic characteristics. In other words, not only can system 300 compute complex patterns in highly dimensional spaces for prediction or behavioral analysis, but system 300 may constantly evolve according to the captured data/observations from the network), wherein the machine learning model categorizes each of the plurality of enterprise networks… (paragraphs 37, 60, 79-82 and Figs. 5A-C: e.g., utilize machine learning techniques, to enforce policies and to monitor the health of the network; may leverage machine learning, such as a regression model, to apply learned and adaptive health status thresholds to the monitored network as part of a predictive scoring model; visualize the health status of the network across different points in time and/or different categories of health statuses) by clustering…using a density based clustering technique (par. 39: machine learning techniques that network assurance process 248 can employ may include, but are not limited to, nearest neighbor (NN) techniques (e.g., k-NN models, replicator NN models, etc.), statistical techniques (e.g., Bayesian networks, etc.), clustering techniques (e.g., k-means, mean-shift, etc.), neural networks (e.g., reservoir networks, artificial neural networks, etc.), support vector machines (SVMs), logistic or other regression, Markov models or chain [Examiner’s Note: Mean-shift clustering is a density-based clustering technique]); and generating, by the server device, using the machine learning model, an enterprise architecture for a first enterprise network based on … of the first enterprise network and its respective enterprise architecture, the first enterprise network being a new enterprise network or an existing enterprise network from among the plurality of enterprise networks (paragraphs 25 and 56: In various embodiments, cloud service 302 may further include an 
automation and feedback controller 316 that provides closed-loop control instructions 338 back to the various devices in the monitored network. For example, based on the predictions by analyzer 312, the evaluation of any predefined health status rules by cloud service 302, and/or input from an administrator or other user via input 318, controller 316 may instruct an endpoint device, networking device in branch office 306 or campus 308, or a network service or control plane function 310, to adjust its operations (e.g., by signaling an endpoint to use a particular AP 320 or 328, etc.)); generating, by the server device…one or more recommendations for updating the enterprise architecture for the first enterprise network based on a change in the network health score (paragraphs 76, 80, and 85: health status scores for the network are conveyed to the network administrator or other user of the network assurance system. In various embodiments, visualization data 412 may include an indication of the predicted health score for presentation by an electronic display in conjunction with a visualization; GUI may display a status indicator 504 thereby indicating the status of the network. For example, as shown in FIG. 5A, the overall health status of the network in terms of device throughput may be ‘Good’ based on one or more adaptive thresholds of the predictive model. 
In addition, the GUI may also present various metrics 506 used as part of the prediction (e.g., some of the metrics used as part of input feature vector M.sub.i for the shown location); example of the AP experiencing a roaming issue that affects its health and contributes to a decline in the overall health of the network…the network assurance system may offer suggested solutions [i.e., one or more recommendations] to the administrator via effect 604, such as by adjusting the transmission strengths of the AP and one of its neighbors, so as to prevent clients from switching back and forth between the APs [wherein the suggested solution, if/when implemented by the administrator, enables the network to change the enterprise architecture as needed based on network demands, e.g., adjusting transmission strength would be a change that prevents clients from switching back and forth between APs and as a result would raise network health score]). Mermoud does not explicitly teach: categorizes each of the plurality of enterprise networks by clustering the plurality of enterprise networks based on a number of client devices per access point…and determines a correlation between a category of each of the plurality of enterprise networks and their respective enterprise architecture using an association algorithm; a category of the first enterprise network; generating… using the machine learning model, one or more recommendations; changing, by the first enterprise network, the enterprise architecture based on the one or more recommendations as needed based on network demands. Chen teaches: categorizes each of the plurality of enterprise networks by clustering the plurality of enterprise networks based on a number of client devices per access point… (col. 8 line 37 – col. 10 line 53; col. 11 lines 17-48; and Figs. 1-2: e.g., As an example, connection capacity group 128 is illustrated in FIG. 
1 as including five available ports 134, whereas connection capacity group of data center 120 is illustrated as including three available ports 130. Any number of available ports 134 and available ports 130 may be available at a particular point in time, the five available ports 134 of connection capacity group 128 and the three available ports of connection capacity group 126 are given as an example to illustrate a moment in time when connection capacity group 128 has a greater quantity of available ports than connection capacity group 126; Connectivity coordinator 104 may determine bias response information for available ports 130 at connection capacity group 126 and for available ports 134 at connection capacity group 128 by applying the determined capacities for connection capacity group 126 and connection capacity group 128 to capacity bias model; FIG. 2 is a block diagram of a provider network that includes multiple connection capacity groups [with respective number of client devices connected to access points] that accept dedicated physical connections from multiple client networks, according to some embodiments. Connectivity coordinator 206, capacity bias model 208, and connection capacity database 204 may function in the same manner as connectivity coordinator 104, capacity bias model 106, and connection capacity database 102 described in regard to FIG. 1) and determines a correlation between a category of each of the plurality of enterprise networks and their respective enterprise architecture using an association algorithm (Fig. 9 and col. 22 line 33 – col. 23 lines 39: describing/displaying determined correlation/association between different capacity groups [categories] of enterprise networks and corresponding architecture via steps/algorithm for determining the association, such as shown in Fig. 
9’s Connectivity Center Home Page displaying multiple categories of networks and architecture specifications such as Data Center Capacity Group A, Bandwidth 10GBPS, Rate $0.30/hour – e.g., (71) FIG. 9 is a diagram illustrating a graphical user interface for selecting a connection capacity group in response to a request for a dedicated physical connection from a client network to a provider network, according to some embodiments. A user may be displayed interface 900 in response to sending a connectivity request as described above in regard to FIG. 8. In some embodiments, a selection of a connection capacity group in response to a request for a dedicated physical connection from a client network to a provider network may be indicated by other means than a graphical user interface. For example, a selection may be indicated in a programmatic response, indicated via a command line interface; displays bias response information included in a response 616 such as different prices for different connection capacity groups based on a determined capacity at each respective connection capacity group. Interface 900 includes instructions 902 instructing a user on how to select a connection location (connection capacity group). Blocks 904, 906, 908, and 910 respectively include bias response information for a connection capacity group at connection locations 1, 2, 3, and 4. As may be noted in the previous figure, FIG. 8); a category of the first enterprise network (col. 8 line 37 – col. 10 line 53; col. 11 lines 17-48; and Figs. 1-2: e.g., capacity groups; provider network that includes multiple connection capacity groups that accept dedicated physical connections from multiple client networks). 
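For context on the association step the Examiner maps to Chen, the claimed "correlation between a category of each of the plurality of enterprise networks and their respective enterprise architecture using an association algorithm" can be sketched as a minimal association-rule mining pass over historical (category, architecture) observations. This is purely an illustrative sketch of the general technique; the category names, architecture labels, and confidence threshold below are hypothetical and are not drawn from Chen, Mermoud, or the pending claims.

```python
from collections import Counter, defaultdict

def architecture_rules(history, min_confidence=0.5):
    """Derive rules 'network category -> typical enterprise architecture'
    from (category, architecture) observations, keeping a rule only when
    its confidence (share of the category's observations) clears the
    min_confidence threshold."""
    by_category = defaultdict(Counter)
    for category, architecture in history:
        by_category[category][architecture] += 1
    rules = {}
    for category, counts in by_category.items():
        architecture, hits = counts.most_common(1)[0]  # modal architecture
        confidence = hits / sum(counts.values())
        if confidence >= min_confidence:
            rules[category] = (architecture, confidence)
    return rules

# Hypothetical historical observations from previously seen networks:
history = [
    ("low-density", "single-AP branch"),
    ("low-density", "single-AP branch"),
    ("low-density", "mesh"),
    ("high-density", "mesh"),
    ("high-density", "mesh"),
]
rules = architecture_rules(history)
```

Rule confidence here stands in for the recited correlation; a production system would mine rules over many features rather than a single category label.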
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Mermoud with Chen because the references are analogous since each is directed to computer-implemented features for managing the performance of computer network architecture, which is within applicant’s field of endeavor of generating enterprise architectures, and because modifying Mermoud to incorporate Chen’s categorizing by clustering networks based on a number of client devices per access point and determining a correlation between a category of the networks and respective architecture, as claimed, would serve the motivation to visualize health status of networks across different categories of health statuses (Mermoud at paragraph 79), would help establish and manage dedicated connections from client networks (Chen at col. 12 lines 27-33), and would help ensure healthy network performance (Mermoud at paragraph 3) in pursuit of optimizing performance for different customers and network types (Chen at col. 1 lines 38-40); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Mermoud and Chen do not explicitly teach: generating… using the machine learning model, one or more recommendations (Examiner’s Note: It is noted that Mermoud does teach using a machine learning model in at least paragraphs 38-40 and also teaches generating recommendations in at least paragraph 85, but Mermoud does not teach using the ML model to make the recommendations); changing, by the first enterprise network, the enterprise architecture based on the one or more recommendations as needed based on network demands.
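The Examiner’s Note at Mermoud ¶39 observes that mean-shift is a density-based clustering technique. A minimal one-dimensional sketch of mean-shift over the single feature the claim recites (clients per access point) shows why networks with similar load collapse into the same category; the data points and bandwidth below are hypothetical and illustrative only, not values from any cited reference.

```python
def mean_shift_1d(points, bandwidth, iters=100):
    """Flat-kernel mean-shift on one feature. Each point iteratively
    shifts to the mean of its neighbors within `bandwidth`; points that
    converge to the same mode share a cluster label."""
    modes = list(points)
    for _ in range(iters):
        new_modes = []
        for m in modes:
            neighbors = [p for p in points if abs(p - m) <= bandwidth]
            new_modes.append(sum(neighbors) / len(neighbors))  # shift to local mean
        modes = new_modes
    # Merge converged modes that lie within one bandwidth of each other.
    centers, labels = [], []
    for m in modes:
        for i, c in enumerate(centers):
            if abs(m - c) <= bandwidth:
                labels.append(i)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels, centers

# Hypothetical average clients-per-AP for six enterprise networks:
clients_per_ap = [3, 4, 5, 30, 31, 32]
labels, centers = mean_shift_1d(clients_per_ap, bandwidth=5)
```

Unlike k-means, this density-seeking procedure does not require the number of categories in advance, which is the usual reason mean-shift is classed as density-based.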
Kaplunov teaches: generating… using the machine learning model, one or more recommendations (paragraphs 8, 11, 13-15, 42, 69, 90, 101, and 124: e.g., FIG. 4 is a flow chart of an example process for using machine learning to generate recommendations that may be implemented to improve network performance; By using machine learning to generate recommendations that may be implemented to improve network performance, the network management platform conserves network resources by determining efficient routing and rerouting paths, reduces cost (e.g., by saving energy resources), improves customer service (e.g., by reducing customer churn), and/or the like); changing, by the first enterprise network, the enterprise architecture based on the one or more recommendations as needed based on network demands (paragraphs 10-11, 120-121, and Fig. 4: As demand for mobile devices and internet of things (IoT) devices continues to increase, network service providers may need to improve or modify network infrastructure to maintain or improve upon network performance; network management platform to use machine learning to generate recommendations that may be implemented to improve network performance; network…platform 230 generates a recommendation to modify resources within one or more data centers 250. In this case, network management platform 230 may provide an instruction to modify physical resources (e.g., by installing or removing equipment at a data center 250) and/or may use an application programming interface (API) to automatically modify (e.g., increase, decrease, etc.) virtual resources within one or more data centers. Additionally, or alternatively, network management platform 230 may use an API to automatically reconfigure resources within a data center (e.g., by reallocating virtual resources, by modifying a routing algorithm, etc.)). 
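The recommendation limitation mapped to Kaplunov can be illustrated with a deliberately simple rule-based sketch of "one or more recommendations based on a change in the network health score." The claim and Kaplunov contemplate a machine learning model; the cause labels, playbook entries, and threshold below are hypothetical stand-ins for illustration only.

```python
def recommend(previous_score, current_score, cause, threshold=0.1):
    """Return architecture-update recommendations when the network health
    score declines by more than `threshold` (illustrative sketch only)."""
    # Hypothetical mapping from a diagnosed cause to a suggested change,
    # loosely mirroring Mermoud's AP transmission-strength example.
    playbook = {
        "ap_roaming_flap": "adjust transmission strength of the AP and its neighbor",
        "throughput": "add capacity (e.g., provision an additional AP)",
    }
    if previous_score - current_score < threshold:
        return []  # no significant decline; no change recommended
    return [playbook.get(cause, "escalate to administrator review")]

recommendations = recommend(previous_score=0.9, current_score=0.7,
                            cause="ap_roaming_flap")
```

A learned model would replace the fixed playbook, but the input/output contract (score change in, recommended architecture change out) is the same shape as the claimed step.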
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Mermoud/Chen with Kaplunov because the references are analogous since each is directed to computer-implemented features for managing the performance of computer network architecture, which is within applicant’s field of endeavor of generating enterprise architectures, and because modifying Mermoud/Chen such that the recommendations are generated using a machine learning model and to implement a change in accordance with the recommendation as needed based on network demands, as taught by Kaplunov, would serve the motivation to improve network performance and conserve network resources by determining efficient routing and rerouting path to reduce costs (Kaplunov at paragraph 124), and would help ensure healthy network performance (Mermoud at paragraph 3); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claims 8 and 15 are directed to a device (with memory and processor) and a non-transitory, tangible computer-readable device for performing similar limitations as those recited in claim 1 and addressed above. Mermoud, in view of Chen/Kaplunov, teaches a device (with memory and processor) and a non-transitory, tangible computer-readable device for performing the limitations discussed above (Mermoud at paragraphs 30-34: e.g., node/device; computing devices; processor and memory; computer-readable media; See also, Chen at col. 24 lines 41-58: the methods may be implemented by a computer system (e.g., a computer system as in FIG. 
11; See also, Kaplunov at paragraph 66) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors), and claims 8/15 are therefore rejected using the same references and for substantially the same reasons as set forth above. Claims 2/9/16: Mermoud further teaches wherein receiving the historical information comprises continuously receiving the historical information, and wherein the method further comprises: updating the network health score for each of the plurality of enterprise networks based on the continuously received historical information (paragraphs 46, 60, and 73: e.g., model may be adjusted over time based on feedback from the monitored network, users, and/or administrators, thereby refining over time what is considered healthy vs. unhealthy; scoring model of SPM 408 will be updated using explicit feedback 414 during its next retraining period, to improve its health score predictions); and training the machine learning model based on the continuously received historical information and the updated network health scores (paragraphs 54, 60, and 73: e.g., techniques herein may leverage machine learning, such as a regression model, to apply learned and adaptive health status thresholds to the monitored network as part of a predictive scoring model. Such a model may be adjusted over time based on feedback from the monitored network, users, and/or administrators, thereby refining over time what is considered healthy vs. unhealthy; scoring model of SPM 408 will be updated using explicit feedback 414 during its next retraining period, to improve its health score predictions).
Claims 6/13/19: Mermoud further teaches: monitoring a performance of the first enterprise network (paragraphs 34, 46, and 60: e.g., monitor the state of the network; network data collection platform 304 may receive a variety of data feeds that convey collected data 334 from the devices of branch office 306 and campus 308, as well as from network services and network control plane functions; apply learned and adaptive health status thresholds to the monitored network as part of a predictive scoring model. Such a model may be adjusted over time based on feedback from the monitored network, users, and/or administrators); calculating a change in the network health score for the first enterprise network based on the monitored performance (paragraphs 61 and 73: e.g., The device predicts a health status score for the networking equipment in the physical location using the received network metrics as input to a machine learning-based predictive scoring model. The device provides an indication of the predicted health status score in conjunction with a visualization of the physical location for display by an electronic display. The device adjusts the predictive scoring model based on feedback regarding the predicted health status score; provide explicit feedback 414 to the network assurance system indicating that the network location was actually unhealthy at that time. This way, the predictive scoring model of SPM 408 will be updated using explicit feedback 414 during its next retraining period, to improve its health score prediction); determining a cause of the change in the network health score (paragraph 68: e.g., Status prediction module (SPM) 408 may receive the aggregated metrics 410 (M.sub.i) from MAM 406 as input and outputs a predicted health status score, denoted as S.sub.i herein.
In various embodiments, the status score S.sub.i can be a scalar or have multiple components (e.g., for on-boarding/roaming experience, for voice, for video, for browsing, etc.), thus reflecting different aspects of interest to the network administrator. For example, the health status score may indicate a network throughput health status, a roaming health status, interference health status, or the like, regarding the monitored network); and generating one or more recommendations for updating the enterprise architecture for the first enterprise network to modify the cause of the change in the network health score (paragraph 85: the network assurance system may offer suggested solutions to the administrator via effect 604, such as by adjusting the transmission strengths of the AP and one of its neighbors, so as to prevent clients from switching back and forth between the APs). Claims 7/14/20: Mermoud further teaches wherein generating the network health score for each of the plurality of enterprise networks comprises generating an overall network health score for each of the plurality of enterprise networks based on a plurality of sub-network health scores (paragraphs 61 and 67-68: Specifically, according to one or more embodiments of the disclosure as described in detail below, a device receives network metrics regarding networking equipment of a network in a physical location. The device predicts a health status score for the networking equipment in the physical location using the received network metrics as input to a machine learning-based predictive scoring model; may receive the aggregated metrics 410 (M.sub.i) from MAM 406 as input and outputs a predicted health status score, denoted as S.sub.i herein. In various embodiments, the status score S.sub.i can be a scalar or have multiple components (e.g., for on-boarding/roaming experience, for voice, for video, for browsing, etc.)). Claims 4-5, 11-12, 18, and 21 are rejected under 35 U.S.C.
§103 as unpatentable over Mermoud et al. (US 2019/0028909, hereinafter “Mermoud”) in view of Chen (US Patent No. 10,536,389) in view of Kaplunov et al. (US 2019/0036789, hereinafter “Kaplunov”), as applied to claims 1, 8, and 15 above, and further in view of Abu el Ata et al. (US 2006/0241931, hereinafter “Abu”). Claims 4/11/18: Although Mermoud teaches using the machine learning models (paragraphs 12, 37-41, 48-54, and 60), Mermoud, Chen, and Kaplunov do not explicitly teach generating the enterprise architecture for the first enterprise network comprises: identifying, using the [machine learning] model, a subset of enterprise networks from among the plurality of enterprise networks with a same category as the first enterprise network; comparing the first enterprise network to the subset of enterprise networks to identify at least one enterprise network, the comparison being based on one or more parameters for generating the enterprise architecture for the first enterprise network; and generating the enterprise architecture for the first enterprise network based on the enterprise architecture of the identified at least one enterprise network. Abu teaches wherein generating the enterprise architecture for the first enterprise network comprises: identifying, using the [machine learning] model, a subset of enterprise networks from among the plurality of enterprise networks with a same category as the first enterprise network (paragraphs 8-9: e.g., Embodiments of the invention provide an automated system and method for defining and analyzing enterprise architectures. 
In particular, the present invention models service architectures and cost architectures of enterprise information systems; Given a current state ("situation") of the enterprise information system architecture, the table provides an indication of remedies predefined by the mathematical model, that is modifications, corrections and/or optimizations to the IS architecture to achieve target performance and meet enterprise requirements); comparing the first enterprise network to the subset of enterprise networks to identify at least one enterprise network, the comparison being based on one or more parameters for generating the enterprise architecture for the first enterprise network; and generating the enterprise architecture for the first enterprise network based on the enterprise architecture of the identified at least one enterprise network (paragraphs 8 and 41: e.g., Embodiments of the invention provide an automated system and method for defining and analyzing enterprise architectures; Step 37 constructs the three dimensional (business, service and cost) enterprise model of model construction module 30. In one embodiment, step 37 combines the business architecture, service architecture and cost architecture parameters and definitions from steps 31, 33 and 35 into a full enterprise dynamic model). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Mermoud/Chen/Kaplunov with Abu because the references are analogous since each is directed to computer-implemented features for managing the performance of computer architecture, which is within applicant’s field of endeavor of generating enterprise architectures, and because modifying Mermoud/Chen/Kaplunov to incorporate the teachings of Abu, in the manner claimed, would serve the motivation to optimize the network architecture and achieve target performance to meet enterprise requirements (Abu at paragraph 9), such as to ensure healthy network performance as noted by Mermoud (Mermoud at paragraph 3); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claims 5/12: Mermoud does not teach the limitation of claims 5/12. However, Abu further teaches wherein the one or more parameters comprises a budget parameter, a priority parameter, a geographic parameter, and a complexity parameter (paragraph 23: Performance criteria and service and cost criteria as dictated or otherwise influenced by corporate layer 13 are also defined; See also, paragraph 62). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further include, in the combination of Mermoud/Chen/Kaplunov/Abu, Abu’s parameter such as a budget parameter, as claimed, in pursuit of achieving cost improvement pursuant to making system redesign decisions (Abu at paragraph 6); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claim 21: Mermoud further teaches wherein: training the machine learning model comprises training the machine learning model to categorize each of the plurality of enterprise networks (paragraphs 25, 37, 54, 60, 79-82 and Figs. 5A-C: e.g., set of machine learning processes that work in conjunction with one another…allows network assurance system 300 to operate in real-time and constantly learn and adapt to new network conditions and traffic characteristics; system 300 may constantly evolve according to the captured data/observations from the network; may leverage machine learning, such as a regression model, to apply learned and adaptive health status thresholds to the monitored network as part of a predictive scoring model; visualize the health status of the network across different points in time and/or different categories of health statuses), generating the enterprise architecture for the first enterprise network (paragraphs 25 and 56: controller 316 may instruct an endpoint device, networking device in branch office 306 or campus 308, or a network service or control plane function 310, to adjust its operations (e.g., by signaling an endpoint to use a particular AP 320 or 328, etc.)); updating the network health score for each of the plurality of enterprise networks based on continuously received
historical information (paragraphs 46, 60, and 73: e.g., model may be adjusted over time based on feedback from the monitored network, users, and/or administrators, thereby refining over time what is considered healthy vs. unhealthy; scoring model of SPM 408 will be updated using explicit feedback 414 during its next retraining period, to improve its health score predictions); and training the machine learning model based on the continuously received historical information and the updated network health scores (paragraphs 54, 60, and 73: e.g., techniques herein may leverage machine learning, such as a regression model, to apply learned and adaptive health status thresholds to the monitored network as part of a predictive scoring model. Such a model may be adjusted over time based on feedback from the monitored network, users, and/or administrators, thereby refining over time what is considered healthy vs. unhealthy; scoring model of SPM 408 will be updated using explicit feedback 414 during its next retraining period, to improve its health score predictions). Mermoud does not teach identifying, using the machine learning model, a subset of enterprise networks from among the plurality of enterprise networks with a same category as the first enterprise network; comparing the first enterprise network to the subset of enterprise networks to identify at least one enterprise network, the comparison being based on one or more parameters for generating the enterprise architecture for the first enterprise network; and generating the enterprise architecture for the first enterprise network based on the enterprise architecture of the identified at least one enterprise network.
Abu teaches identifying, using the machine learning model, a subset of enterprise networks from among the plurality of enterprise networks with a same category as the first enterprise network (paragraphs 8-9: e.g., Embodiments of the invention provide an automated system and method for defining and analyzing enterprise architectures. In particular, the present invention models service architectures and cost architectures of enterprise information systems; Given a current state ("situation") of the enterprise information system architecture, the table provides an indication of remedies predefined by the mathematical model, that is modifications, corrections and/or optimizations to the IS architecture to achieve target performance and meet enterprise requirements); comparing the first enterprise network to the subset of enterprise networks to identify at least one enterprise network, the comparison being based on one or more parameters for generating the enterprise architecture for the first enterprise network; and generating the enterprise architecture for the first enterprise network based on the enterprise architecture of the identified at least one enterprise network (paragraphs 8 and 41: e.g., Embodiments of the invention provide an automated system and method for defining and analyzing enterprise architectures; Step 37 constructs the three dimensional (business, service and cost) enterprise model of model construction module 30. In one embodiment, step 37 combines the business architecture, service architecture and cost architecture parameters and definitions from steps 31, 33 and 35 into a full enterprise dynamic model). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Mermoud/Chen/Kaplunov with Abu because the references are analogous since each is directed to computer-implemented features for managing the performance of computer architecture, which is within applicant’s field of endeavor of generating enterprise architectures, and because modifying Mermoud/Chen/Kaplunov to incorporate the teachings of Abu, in the manner claimed, would serve the motivation to optimize the network architecture and achieve target performance to meet enterprise requirements (Abu at paragraph 9), such as to ensure healthy network performance as noted by Mermoud (Mermoud at paragraph 3); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Vasseur et al. (US Patent No. 10,680,889): discloses network configuration change analysis using machine learning. Le et al. (US 2019/0199589): discloses features of a network equipment operation adjustment system, including training a machine learning model to identify network equipment and to reconfigure parameters related thereon (at least paragraphs 35-38). NetQoS: NetQoS Delivers Customised Network and Application Performance Data across IT Silos; Network Management Software Suite Enhances Performance Analysis and Reporting for Personnel Responsible for Application Delivery. Anonymous. 
M2 Presswire [Coventry] 19 May 2009: discloses a computer-implemented solution for monitoring network performance, e.g., network traffic analysis, device performance management, and long-term packet capture in order to troubleshoot problems and implement changes. Any inquiry of a general nature or relating to the status of this application or concerning this communication or earlier communications from the Examiner should be directed to Timothy A. Padot whose telephone number is 571.270.1252. The Examiner can normally be reached on Monday-Friday, 8:30 - 5:30. If attempts to reach the examiner by telephone are unsuccessful, the Examiner’s supervisor, Brian Epstein can be reached at 571.270.5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form. /TIMOTHY PADOT/ Primary Examiner, Art Unit 3625 03/13/2026

Prosecution Timeline

Dec 02, 2021
Application Filed
Aug 25, 2023
Non-Final Rejection — §101, §103
Jan 02, 2024
Response after Non-Final Action
Jan 02, 2024
Response Filed
Mar 19, 2024
Response Filed
Apr 04, 2024
Final Rejection — §101, §103
Jul 09, 2024
Response after Non-Final Action
Jul 11, 2024
Response after Non-Final Action
Sep 12, 2024
Response after Non-Final Action
Oct 09, 2024
Request for Continued Examination
Oct 09, 2024
Response after Non-Final Action
Oct 10, 2024
Response after Non-Final Action
Nov 21, 2024
Non-Final Rejection — §101, §103
Feb 26, 2025
Response Filed
Mar 17, 2025
Final Rejection — §101, §103
Apr 11, 2025
Interview Requested
Apr 17, 2025
Response after Non-Final Action
Jun 10, 2025
Request for Continued Examination
Jun 17, 2025
Response after Non-Final Action
Jun 27, 2025
Non-Final Rejection — §101, §103
Sep 18, 2025
Examiner Interview Summary
Sep 18, 2025
Applicant Interview (Telephonic)
Sep 29, 2025
Response Filed
Oct 20, 2025
Final Rejection — §101, §103
Dec 10, 2025
Response after Non-Final Action
Jan 14, 2026
Request for Continued Examination
Jan 28, 2026
Response after Non-Final Action
Mar 13, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586094
AUTOMATIC EXPERIENCE RESEARCH WITH A USER PERSONALIZATION OPTION METHOD AND APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12586111
TRANSACTION AND RECEIPT BASED ALERT AND NOTIFICATION SYSTEM AND TECHNIQUES
2y 5m to grant Granted Mar 24, 2026
Patent 12586118
SYSTEMS AND METHODS FOR SURPRISE OBJECT DISTRIBUTION
2y 5m to grant Granted Mar 24, 2026
Patent 12561631
WORK MANAGEMENT SYSTEM, CALIBRATION WORK MANAGEMENT SERVER, AND CALIBRATION WORK MANAGEMENT METHOD
2y 5m to grant Granted Feb 24, 2026
Patent 12548037
Forward Context Browsing
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
39%
Grant Probability
67%
With Interview (+28.1%)
3y 9m
Median Time to Grant
High
PTA Risk
Based on 562 resolved cases by this examiner. Grant probability derived from career allow rate.
