DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
Observation 1: Applicant introduces the term “configuration data” in claims 23 and 33. It is unclear whether this configuration data refers to the same configuration data recited in claim 21. No rejection under 35 U.S.C. 112(b) is made on this basis; however, the examiner recommends that applicant provide clarification.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 29 and 39 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention.
Regarding Claim 29: The claim recites “to the compute node, configuration data for the containerized …”. The term “configuration data” was introduced earlier in claim 21, and claim 29 subsequently recites “the configuration data”. It is unclear whether “the configuration data” recited in claim 29 refers to the configuration data of claim 21 or to the configuration data newly introduced in claim 29, rendering the claim indefinite. For purposes of examination, the examiner interprets the limitation “to reconcile the state of the configuration object to the containerized routing protocol process, the processing circuitry is configured to send, to the compute node, configuration data for the containerized routing protocol process, the configuration data for the containerized routing protocol process generated from the configuration object and specifying the Internet Protocol address for the second network router as a neighbor address for the first network router.” in claim 29 as “to reconcile the state of the configuration object to the containerized routing protocol process, the processing circuitry is configured to send, to the compute node, the configuration data for the containerized routing protocol process, the configuration data for the containerized routing protocol process generated from the configuration object and specifying the Internet Protocol address for the second network router as a neighbor address for the first network router.”
Regarding Claim 39: The claim recites “to the compute node, configuration data for the containerized …”. The term “configuration data” was introduced earlier in claim 31, and claim 39 subsequently recites “the configuration data”. It is unclear whether “the configuration data” recited in claim 39 refers to the configuration data of claim 31 or to the configuration data newly introduced in claim 39, rendering the claim indefinite. For purposes of examination, the examiner interprets the limitation “reconciling the state of the configuration object to the containerized routing protocol process comprises sending, to the compute node, configuration data for the containerized routing protocol process, the configuration data for the containerized routing protocol process generated from the configuration object and specifying the Internet Protocol address for the second network router as a neighbor address for the first network router.” in claim 39 as “reconciling the state of the configuration object to the containerized routing protocol process comprises sending, to the compute node, the configuration data for the containerized routing protocol process, the configuration data for the containerized routing protocol process generated from the configuration object and specifying the Internet Protocol address for the second network router as a neighbor address for the first network router.”
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21, 23-25, 30, 31, 33-35, and 40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Maurya et al. (US 2022/0035651 A1), hereinafter Maurya.
Regarding claim 21: Maurya teaches a controller (Fig. 1 SDN manager and SDN controller) comprising:
processing circuitry having access to storage media, the processing circuitry configured to ([0089] one or more processor to perform actions):
receive a request for a custom resource for a network router ([0029] API requests identify (1) a set of machines to deploy and/or modify in the set of machines, (2) a set of network elements to connect to the set of machines, or (3) a set of service machines to perform services for the set of machines)([0037-0038] The API processing server 140 parses each received intent-based API request into one or more individual requests. When the requests relate to the deployment of machines, the API server 140 provides these requests directly to the compute managers and controllers 117,)([0041-0042])([0039] The SDN manager cluster 110 directs the SDN controller cluster 115 to configure the network elements to implement the desired forwarding elements and/or service elements (e.g., logical forwarding elements and logical service elements) of one or more logical networks. As further described below, the SDN controller cluster 115 interacts with local controllers on host computers and edge gateways to configure the network elements in some embodiments.)([0031] use Custom Resource Definitions (CRDs) to define additional networking constructs and policies that complement the Kubernetes native resources)([0067] custom resource definitions)([0035] the control system 100 uses one or more CRDs that define attributes of custom-specified network resources that are referred to by the received API requests)([0039-0040] [0042] routers bridges and switches and other network elements, Fig. 2),
wherein the request includes configuration data for a configuration object that is an instance of the custom resource for the network router ([0040-0042] Fig. 1, Fig. 2, The API server 140, in some embodiments, provides the CRDs that have been defined for extended network constructs to the NPA 145 for it to process the APIs that refer to the corresponding network constructs. The API server 140 also provides configuration data from a configuration storage to the NPA 145. The configuration data, in some embodiments, includes parameters that adjust pre-defined template rules that the NPA 145 follows to perform its automated processes. The NPA 145 performs these automated processes to execute the received API requests in order to direct the SDN manager cluster 110 to deploy or configure the network elements for the VPC)([0044] After receiving the APIs from the NPAs 145, the SDN managers 110 in some embodiments direct the SDN controllers 115 to configure the network elements to implement the network state expressed by the API calls.), and
wherein the custom resource for the network router is exposed using an interface of the controller (Fig. 1, Fig. 2, API interfaces between SDN manager/SDN controller and Host computer/Router)(Fig. 2, elements 205 are host computers with Virtual routers element 255); and
reconcile a state of the configuration object to a containerized routing protocol process executing at a compute node (Fig. 1, Fig. 2, Host computer) to cause the containerized routing protocol process to implement a control plane ([0043-0044] the SDN controllers 115 serve as the central control plane (CCP) of the control system 100, the SDN controllers 115 push the high-level configuration data to the local control plane (LCP) agents 220 on host computers 205, LCP agents 225 on edge appliances 210, and TOR (top-of-rack) agents 230 of TOR switches 215.) of the network router (Fig. 2, Virtual routers 255 within host computer 205) based on the configuration data ([0043-0044] Fig. 1, Fig. 2, After receiving the APIs from the NPAs 145, the SDN managers 110 in some embodiments direct the SDN controllers 115 to configure the network elements to implement the network state expressed by the API calls. In some embodiments, the SDN controllers 115 serve as the central control plane (CCP) of the control system 100. FIG. 2 depicts the SDN controllers 115 acting as the CCP computing high-level configuration data (e.g., port configuration, policies, forwarding tables, service tables, etc.). In such capacity, the SDN controllers 115 push the high-level configuration data to the local control plane (LCP) agents 220 on host computers 205, LCP agents 225 on edge appliances 210, and TOR (top-of-rack) agents 230 of TOR switches 215.).
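For illustration only (this sketch is not part of the record and is not drawn from Maurya or the claims' supporting disclosure), the claimed arrangement of a controller reconciling a custom-resource configuration object to a containerized routing protocol process can be modeled minimally in Python. All names (the "BGPRouter" kind, `routerID`, `peerAddress`, `reconcile`, etc.) are hypothetical stand-ins chosen to mirror the claim language.

```python
# Hypothetical sketch: a controller reconciles the state of a custom-resource
# configuration object to a (simulated) containerized routing protocol process.

def generate_config_data(config_object):
    """Derive routing-process configuration data from the configuration object
    (cf. claim 29: the neighbor address is the IP address of the second router)."""
    spec = config_object["spec"]
    return {
        "router_id": spec["routerID"],
        "neighbors": [{"address": spec["peerAddress"]}],
    }

class SimulatedRoutingProcess:
    """Stands in for a containerized routing protocol process on a compute node."""
    def __init__(self):
        self.applied = None

    def apply(self, config_data):
        # Receiving the configuration data implements the router's control plane.
        self.applied = config_data

def reconcile(config_object, routing_process):
    """Reconcile the configuration object's state to the routing process by
    sending it the configuration data generated from the object."""
    config_data = generate_config_data(config_object)
    routing_process.apply(config_data)
    return config_data

# A configuration object that is an instance of a hypothetical custom resource.
config_object = {
    "kind": "BGPRouter",
    "metadata": {"name": "router-1"},
    "spec": {"routerID": "10.0.0.1", "peerAddress": "10.0.0.2"},
}
```

This is only a conceptual aid for the claim mapping above; it does not represent Maurya's SDN manager/controller implementation.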
Regarding claim 23: Maurya teaches the controller of claim 21,
Maurya teaches wherein to reconcile the state of the configuration object to the containerized routing protocol process, the processing circuitry is configured to: send, to the compute node, configuration data for the containerized routing protocol process and generated from the configuration object ([0043-0044] Fig. 1, Fig. 2, After receiving the APIs from the NPAs 145, the SDN managers 110 in some embodiments direct the SDN controllers 115 to configure the network elements to implement the network state expressed by the API calls. In some embodiments, the SDN controllers 115 serve as the central control plane (CCP) of the control system 100. FIG. 2 depicts the SDN controllers 115 acting as the CCP computing high-level configuration data (e.g., port configuration, policies, forwarding tables, service tables, etc.). In such capacity, the SDN controllers 115 push the high-level configuration data to the local control plane (LCP) agents 220 on host computers 205, LCP agents 225 on edge appliances 210, and TOR (top-of-rack) agents 230 of TOR switches 215.).
Regarding claim 24: Maurya teaches the controller of claim 21, wherein:
Maurya teaches the controller further comprises a configuration node (Fig. 1, Fig. 2, SDN manager and SDN controller) configured for execution by the processing circuitry, and
the configuration node includes a custom application programming interface (API) server to process requests for operations on custom resources for software-defined networking (SDN) architecture configuration, the custom resources including the custom resource for the network router ([0043-0044] Fig. 1, Fig. 2, After receiving the APIs from the NPAs 145, the SDN managers 110 in some embodiments direct the SDN controllers 115 to configure the network elements to implement the network state expressed by the API calls. In some embodiments, the SDN controllers 115 serve as the central control plane (CCP) of the control system 100. FIG. 2 depicts the SDN controllers 115 acting as the CCP computing high-level configuration data (e.g., port configuration, policies, forwarding tables, service tables, etc.). In such capacity, the SDN controllers 115 push the high-level configuration data to the local control plane (LCP) agents 220 on host computers 205, LCP agents 225 on edge appliances 210, and TOR (top-of-rack) agents 230 of TOR switches 215.) ([0029] API requests identify (1) a set of machines to deploy and/or modify in the set of machines, (2) a set of network elements to connect to the set of machines, or (3) a set of service machines to perform services for the set of machines)([0037-0038] The API processing server 140 parses each received intent-based API request into one or more individual requests. When the requests relate to the deployment of machines, the API server 140 provides these requests directly to the compute managers and controllers 117,)([0041-0042])([0039] The SDN manager cluster 110 directs the SDN controller cluster 115 to configure the network elements to implement the desired forwarding elements and/or service elements (e.g., logical forwarding elements and logical service elements) of one or more logical networks. As further described below, the SDN controller cluster 115 interacts with local controllers on host computers and edge gateways to configure the network elements in some embodiments.)([0031] use Custom Resource Definitions (CRDs) to define additional networking constructs and policies that complement the Kubernetes native resources)([0067] custom resource definitions)([0035] the control system 100 uses one or more CRDs that define attributes of custom-specified network resources that are referred to by the received API requests)([0039-0040] [0042] routers bridges and switches and other network elements, Fig. 2).
Regarding claim 25: Maurya teaches the controller of claim 21,
Maurya teaches wherein the controller is implemented by an orchestration system cluster comprising the compute node (Fig. 1, Fig. 2, system 100, with host computer nodes)([0030-0031] cluster of worker nodes and cluster of control plan nodes)([0036, 0038-0042, 0055-0056, 0061-0073]).
Regarding claim 30: Maurya teaches the controller of claim 21, wherein the processing circuitry is configured to:
Maurya teaches receive a subscription request for the custom resource for the network router ([0029] Fig. 1, configuring the network plugin agents to receive notifications includes registering the network plugin agents with an API (Application Programming Interface) processor that receives intent-based API requests, and parses these API requests to identify (1) a set of machines to deploy and/or modify in the set of machines, (2) a set of network elements to connect to the set of machines, or (3) a set of service machines to perform services for the set of machines. In some embodiments, the API is a hierarchical document that can specify multiple different compute and/or network elements at different levels of a compute and/or network element hierarchy.)([0036, 0038-0044]), and
reconcile the state of the configuration object to the containerized routing protocol process based on the subscription request ([0038-0044] After receiving the APIs from the NPAs 145, the SDN managers 110 in some embodiments direct the SDN controllers 115 to configure the network elements to implement the network state expressed by the API calls. In some embodiments, the SDN controllers 115 serve as the central control plane (CCP) of the control system 100. FIG. 2 depicts the SDN controllers 115 acting as the CCP computing high-level configuration data (e.g., port configuration, policies, forwarding tables, service tables, etc.). In such capacity, the SDN controllers 115 push the high-level configuration data to the local control plane (LCP) agents 220 on host computers 205, LCP agents 225 on edge appliances 210, and TOR (top-of-rack) agents 230 of TOR switches 215.).
Regarding claim 31: claim 31 is rejected with the same reasoning as claim 21.
Regarding claim 33: claim 33 is rejected with the same reasoning as claim 23.
Regarding claim 34: claim 34 is rejected with the same reasoning as claim 24.
Regarding claim 35: claim 35 is rejected with the same reasoning as claim 25.
Regarding claim 40: claim 40 is rejected with the same reasoning as claim 21.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 22, 26, 32, and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Maurya et al. (US 2022/0035651 A1), hereinafter Maurya, in view of Chen (US 10,708,125 B1), hereinafter Chen.
Regarding claim 22: Maurya teaches the controller of claim 21, wherein:
Maurya does not explicitly teach, but Chen teaches
the network router comprises a Border Gateway Protocol (BGP) router ([Col. 11 Lines 48-65][Col. 12 Lines 1-18] Fig. 2, The border network 214 can forward packets between the compute service provider 210 and the co-location center 240 via the private network 250. The border network 214 can encompass multiple geographic areas. The border network 214 can communicate routing information with the client gateway 242 using a Border Gateway Protocol (BGP), such as internal BGP (iBGP). BGP is a TCP-based application layer protocol that is used to share routing information. Generally, iBGP peers are connected as a full mesh. However, when the border network 214 encompasses multiple geographic areas, a full mesh can potentially cause scalability issues. However, the border network 214 can include route reflectors to potentially increase the scalability of the border network 214. A route reflector (e.g., a network device such as an iBGP router) can advertise or reflect any routes it learned from one BGP router to other BGP peers within an autonomous system (AS). The border network 214 can be configured as a single AS. The route reflectors can be organized hierarchically in tiers so that route reflectors perform mutual reflection), and
the containerized routing protocol process implements decentralized route sharing with another containerized routing protocol process executing at another compute node ([Col. 11 Lines 48-65][Col. 12 Lines 1-18] Fig. 2, The border network 214 can forward packets between the compute service provider 210 and the co-location center 240 via the private network 250. The border network 214 can encompass multiple geographic areas. The border network 214 can communicate routing information with the client gateway 242 using a Border Gateway Protocol (BGP), such as internal BGP (iBGP). BGP is a TCP-based application layer protocol that is used to share routing information. Generally, iBGP peers are connected as a full mesh. However, when the border network 214 encompasses multiple geographic areas, a full mesh can potentially cause scalability issues. However, the border network 214 can include route reflectors to potentially increase the scalability of the border network 214. A route reflector (e.g., a network device such as an iBGP router) can advertise or reflect any routes it learned from one BGP router to other BGP peers within an autonomous system (AS). The border network 214 can be configured as a single AS. The route reflectors can be organized hierarchically in tiers so that route reflectors perform mutual reflection).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Maurya in view of Chen to employ a BGP router and share routing data between network elements, because BGP supports sharing routing data among different network elements using a TCP-based application layer protocol, and route reflectors would help increase the scalability of a border network connecting different networks to each other (Chen, Col. 11, Lines 48-65).
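As background only (not part of the record), the route-reflector behavior Chen describes, where a reflector re-advertises a route learned from one iBGP peer to its other peers so the peers need not form a full mesh, can be modeled with a simplified, hypothetical Python sketch; the class and variable names are illustrative and do not come from Chen.

```python
# Simplified, hypothetical model of iBGP route reflection: a route reflector
# re-advertises (reflects) a route learned from one iBGP peer to its other
# peers within the autonomous system, avoiding a full iBGP mesh.

class Peer:
    def __init__(self, name):
        self.name = name
        self.routes = set()  # routes this peer has learned

    def receive(self, route):
        self.routes.add(route)

class RouteReflector:
    def __init__(self, peers):
        self.peers = list(peers)

    def learn(self, route, from_peer):
        # Reflect the route to every client peer except the one it came from.
        for peer in self.peers:
            if peer is not from_peer:
                peer.receive(route)

# Three iBGP peers served by one reflector instead of a 3-way full mesh.
a, b, c = Peer("a"), Peer("b"), Peer("c")
rr = RouteReflector([a, b, c])
rr.learn("10.1.0.0/16", from_peer=a)
```

With n peers, each peer maintains a session only with the reflector rather than with all n-1 other peers, which is the scalability benefit cited in the rationale above.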
Regarding claim 26: Maurya teaches the controller of claim 21,
Maurya does not explicitly teach, but Chen teaches
wherein the custom resource for the network router comprises a BGPRouter resource ([Col. 11 Lines 48-65][Col. 12 Lines 1-18] Fig. 2, The border network 214 can forward packets between the compute service provider 210 and the co-location center 240 via the private network 250. The border network 214 can encompass multiple geographic areas. The border network 214 can communicate routing information with the client gateway 242 using a Border Gateway Protocol (BGP), such as internal BGP (iBGP). BGP is a TCP-based application layer protocol that is used to share routing information. Generally, iBGP peers are connected as a full mesh. However, when the border network 214 encompasses multiple geographic areas, a full mesh can potentially cause scalability issues. However, the border network 214 can include route reflectors to potentially increase the scalability of the border network 214. A route reflector (e.g., a network device such as an iBGP router) can advertise or reflect any routes it learned from one BGP router to other BGP peers within an autonomous system (AS). The border network 214 can be configured as a single AS. The route reflectors can be organized hierarchically in tiers so that route reflectors perform mutual reflection).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Maurya in view of Chen to employ a BGP router and share routing data between network elements, because BGP supports sharing routing data among different network elements using a TCP-based application layer protocol, and route reflectors would help increase the scalability of a border network connecting different networks to each other (Chen, Col. 11, Lines 48-65).
Regarding claim 32: claim 32 is rejected with the same reasoning as claim 22.
Regarding claim 36: claim 36 is rejected with the same reasoning as claim 26.
Allowable Subject Matter
Claims 27-29, and 37-39 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is the reason for allowable subject matter in claims 27 and 37.
With regard to claims 27 and 37, the prior art of record, Maurya (US 2022/0035651 A1) and Chen (US 10,708,125 B1), does not teach or suggest first and second, different custom resources relating to two different network routers, with the first custom resource pointing to the second custom resource, and matching the state of configuration of the second router based on the second custom resource.
Thus, no other prior art of record fairly teaches or suggests the instant claims as a whole.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FADI HAJ SAID whose telephone number is (571)272-2833. The examiner can normally be reached on 8:00 AM - 5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Follansbee can be reached on 571-272-3964. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FADI HAJ SAID/Primary Examiner, Art Unit 2444