Prosecution Insights
Last updated: April 19, 2026
Application No. 18/109,208

MANAGING AN ENCRYPTED CONNECTION WITH A CLOUD SERVICE PROVIDER

Status: Non-Final OA (§103)
Filed: Feb 13, 2023
Examiner: LEE, MICHAEL M
Art Unit: 2436
Tech Center: 2400 — Computer Networks
Assignee: Oracle International Corporation
OA Round: 3 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (above average; 217 granted / 259 resolved; +25.8% vs TC avg)
Interview Lift: +44.1% (resolved cases with vs. without an interview)
Avg Prosecution: 3y 0m (27 currently pending)
Total Applications: 286 (across all art units)
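
The figures in this card fit together arithmetically; a quick check using only the numbers reported above (no external data):

```python
# Verifying the Examiner Intelligence figures against each other.
granted, resolved, total = 217, 259, 286

allow_rate = granted / resolved * 100
assert round(allow_rate) == 84      # "Career Allow Rate: 84%" (83.8% unrounded)

pending = total - resolved
assert pending == 27                # matches "27 currently pending"
```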

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 22.6% (-17.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 259 resolved cases.
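
The Tech Center baseline behind each delta can be back-computed from the figures above (rate minus delta); notably, every statute implies the same 40.0% baseline, consistent with the note that the Tech Center average is a single estimate:

```python
# Back-computing the implied Tech Center baseline per statute:
# implied TC average = reported rate - reported delta.
rates  = {"101": 8.5,   "103": 48.7, "102": 7.7,   "112": 22.6}
deltas = {"101": -31.5, "103": 8.7,  "102": -32.3, "112": -17.4}

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
assert set(tc_avg.values()) == {40.0}   # one uniform 40.0% baseline
```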

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114.

Response to Amendments

This Office Action is in response to the Request for Continued Examination and amendment filed 11/13/2025. In the amendment, claims 1, 8, 11, 18, and 20 have been amended. Claims 1-20 are pending and considered.

Response to Arguments

Applicant's arguments, see pages 7-8 of the Remarks filed 11/13/2025, with respect to the claims rejected under 35 USC 103 over the prior art of record have been fully considered but are not persuasive for the following reasons.

Regarding independent claim 1 (similarly for claims 11 and 20), the examiner acknowledges that applicant amended the claims by adding the underlined limitation(s) reciting "wherein one or more parameters that include a bandwidth to be supported by the communication tunnel are used to implement the communication tunnel at the on-premise tunnel endpoint" and "wherein the one or more updated parameters include an updated bandwidth for the communication tunnel", inter alia. Applicant specifically argued, see page 8 of the Remarks, that the cited references, in particular Hoy, fail to teach the amended claim limitations. In particular, applicant argued: "The Hoy reference, however, only teaches logging a bandwidth history by a VPN agent. For instance, at [0076] Hoy recites that '[f]or example, the bandwidth used by the applications, including bandwidth history, may be logged by the VPN agents' and at [0078], Hoy recites that as 'the application topology changes and machines get moved, the VPN Manager will instruct the VPN agents to reconfigure, merge and split VPNs as needed.' Merely teaching logging bandwidth by a VPN agent does not teach, suggest, or describe 'determining, ... one or more updated parameters ... [that] include an updated bandwidth for the communication tunnel', and then 'transmitting ... the updated bandwidth.'"

The examiner acknowledges applicant's perspective but respectfully disagrees. Regarding the amended limitation concerning the parameters that include a bandwidth for supporting the communication tunnel, Hoy specifically teaches fulfilling VPN infrastructure changes indicated by the throughput measurements of the VPN agents. See e.g., [0109], [0164], [0166]. The examiner asserts that Hoy not only teaches "logging a bandwidth history by a VPN agent" but also teaches updating the VPN infrastructure with bandwidth requirements. See the updated claim rejections below for details. For the above reasons, the claim rejections under 35 USC 103 are maintained and updated as presented below. Applicant is encouraged to include innovative features in the claims to advance the case.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Examiner Notes

The examiner cites particular paragraphs, columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Klimentiev et al. (US20100169497A1, hereinafter "Klimentiev") in view of Hoy et al. (US20170171158A1, hereinafter "Hoy").
Regarding claim 1, Klimentiev teaches: A computer-implemented method (Klimentiev discloses systems and methods for integrating local systems with cloud computing resources by implementing a tunnel agent client and tunnel agent server to establish a secure connection point, see [Abstract]), comprising:

[monitoring], by a computer system within a cloud service provider infrastructure, a communication tunnel established between an on-premise tunnel endpoint within a customer on-premise environment and a cloud-side tunnel endpoint within the cloud service provider infrastructure, wherein one or more parameters [that include a bandwidth to be supported by the communication tunnel] are used to implement the communication tunnel at the on-premise tunnel endpoint (see e.g., Fig. 3, and [0024]: Configuration data 316 (i.e., one or more parameters) may also include information for creating a tunnel agent server software component 317. Tunnel agent server 317 may be used for establishing a secure connection between the server instance 318 and the local system 300A);

determining, by the computer system within the cloud service provider infrastructure, [one or more updated parameters] to the one or more parameters used by the on-premise tunnel endpoint to implement the communication tunnel, [wherein the one or more updated parameters include an updated bandwidth for the communication tunnel] (e.g., [0026]: For example, the cloud controller (i.e., computer system within the cloud service provider infrastructure; see further e.g., Fig. 3) may receive an internet protocol address (e.g., ec2-xxx-xxx-xxx-xxx.cloudnetwork.com) for accessing the new resources. In one embodiment, cloud controller 311 generates new security data to provide secure communications between the tunnel agent server 317, server instance 318, and local systems such as client 313 (i.e., on-premise tunnel endpoint). The new security data is created by cloud controller 311 and a copy of the security data is sent to both a tunnel agent client 314 and tunnel agent server 317. And [0028]: A cloud controller may compare this information to a stored list of users that are authorized to create new resources on the cloud, for example. If the username and password are found on the list, cloud controller authorizes the user at 420. At 430, cloud controller sends the request to a cloud management service specifying an image to instantiate. The configuration data for the image may be retrieved from a database. At 440, a server instance of the specified resource is created. A tunnel agent server is also created to interface with the local system); and

transmitting, by the computer system within the cloud service provider infrastructure to an agent implemented within the customer on-premise environment, one or more instructions to be implemented by the agent to update the one or more parameters used by the on-premise tunnel endpoint to the [one or more updated] parameters, [including the updated bandwidth], to implement the communication tunnel (e.g., [0028]: At 470, the new shared security information is sent back to the tunnel agent client (i.e., agent) on the local system. And [0037]: Configuration data for the client may be maintained by the controller and forwarded (i.e., transmitting) to the tunnel agent client when it connects to the controller, for example. The tunnel agent client may store the configuration information locally, and may be automatically updated when it connects to the controller).

(See Hoy below for teachings of the limitation(s) in brackets above.) While Klimentiev teaches the main concept of the claimed invention, it does not specifically teach the following; in the same field of endeavor, Hoy teaches:

monitoring, …, a communication tunnel established between an on-premise tunnel endpoint within a customer on-premise environment and a cloud-side tunnel endpoint within the cloud service provider infrastructure (Hoy discloses a method for managing VPN tunnels in hybrid cloud environments, see [Abstract]. See Fig. 4, in particular step 413, and [0076]: By event and traffic logging, the VPN Manager (i.e., cloud-side tunnel endpoint within the cloud service provider infrastructure; computer system within a cloud service provider infrastructure) provides visibility into the VPN configuration, shown as step 413. And see Fig. 9 at 901, [0163]: The process begins in step 901, with the VPN Manager monitoring the existing VPN tunnels (i.e., monitoring));

wherein one or more parameters that include a bandwidth to be supported by the communication tunnel are used to implement the communication tunnel at the on-premise tunnel endpoint (e.g., [0075]: the application can provide a VPN filter plugin to the VPN Manager. The VPN filter will be used to filter traffic passing over the VPN tunnel... The VPN filter on a particular VPN tunnel can be modified in response to requests from other applications which share the communication bandwidth of the VPN tunnel. And [0109]: The steps described above provide an illustrative example of the initial configuration of the VPN tunnels. As will be discussed below, as the needs of the applications change during their lifecycle, a well behaved application can issue further requests to change the parameters needed for the tunnel. For example, during deployment or other periods of high usage, the application will require a high bandwidth through the tunnel);

determining, …, one or more updated parameters to the one or more parameters used by the on-premise tunnel endpoint to implement the communication tunnel, wherein the one or more updated parameters include an updated bandwidth for the communication tunnel; to update the one or more parameters used by the on-premise tunnel endpoint to the one or more updated parameters, including the updated bandwidth (e.g., [0078]: Also as depicted as step 417, the VPN tunnels may need to be reconfigured … As the application topology changes and machines get moved, the VPN Manager will instruct the VPN agents to reconfigure. And further refer to Fig. 9, and [0166]: In step 907, a determination whether the change (i.e., update) can be accomplished with the existing set of VPN tunnels… A comparison of the requested bandwidth, current bandwidth and available bandwidth of the current tunnel is made. A comparison of the existing and requested security parameters is made to determine the compatibility of the security parameters requested by the respective applications. The examiner notes that Hoy's teaching of VPN reconfiguring suggests updating parameters to Klimentiev's security configuration data, i.e., the updated parameters).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Hoy in the integrating of local systems with cloud computing resources of Klimentiev by managing VPN tunnels and optimizing traffic flow with VPN agents. This would have been obvious because the person having ordinary skill in the art would have been motivated to provide visibility into the VPN configuration to further optimize traffic flow and/or reconfigure, split and merge VPNs as needed for secure communication between two environments (Hoy, [Abstract]).
Regarding claim 11: claim 11 is a system claim that encompasses limitations similar to those of method claim 1. Therefore, claim 11 is rejected with the same rationale and motivation as applied against claim 1. In addition, Klimentiev teaches a system comprising one or more processors and a non-transitory computer-readable medium storing a set of instructions (Klimentiev, [Abstract]; Fig. 10, Processor 1001; and [0081], computer readable mediums).

Regarding claim 20: claim 20 is a computer-readable medium claim that encompasses limitations similar to those of method claim 1. Therefore, claim 20 is rejected with the same rationale and motivation as applied against claim 1. In addition, Klimentiev teaches a non-transitory computer-readable medium storing a set of instructions, the set of instructions when executed by one or more processors (Klimentiev, [Abstract]; Fig. 10, Processor 1001; and [0081], computer readable mediums).

Regarding claim 2 (similarly claim 12), the Klimentiev-Hoy combination teaches the computer-implemented method of claim 1 and the system of claim 11. Klimentiev further teaches: comprising enabling, by the computer system within the cloud service provider infrastructure, communications between the agent implemented within the customer on-premise environment and the cloud service provider infrastructure (e.g., [0028]: At 450, the connection information may be forwarded to the cloud controller. Upon receipt of the connection information, cloud controller generates shared security information (e.g., a shared key) at 460. The shared security information may enable the tunnel agent client and tunnel agent server to communicate with one another in a secure manner... At 470, the new shared security information is sent back to the tunnel agent client on the local system and the shared security information is also sent to the tunnel agent server on the cloud computing system).

Regarding claim 3 (similarly claim 13), the Klimentiev-Hoy combination teaches the computer-implemented method of claim 1 and the system of claim 11. Klimentiev further teaches: wherein the agent implemented within the customer on-premise environment includes one or more software instances installed within the customer on-premise environment (e.g., [0028]: The tunnel agent server may have connection information to support communication with the software components in the local system (e.g., the cloud controller or the tunnel agent client, i.e., the agent)).

Regarding claim 5 (similarly claim 15), the Klimentiev-Hoy combination teaches the computer-implemented method of claim 1 and the system of claim 11. Klimentiev further teaches: comprising establishing, by the computer system within the cloud service provider infrastructure, the communication tunnel between the on-premise tunnel endpoint within the customer on-premise environment and the cloud-side tunnel endpoint within the cloud service provider infrastructure (e.g., Fig. 9, step 907 (TAC establishes new connection to TAS) through step 912 (TAS routes data using matched connections)).

Regarding claim 6 (similarly claim 16), the Klimentiev-Hoy combination teaches the computer-implemented method of claim 1 and the system of claim 11. Klimentiev further teaches: wherein the on-premise tunnel endpoint is implemented within a network interface card (NIC) of the customer on-premise environment (e.g., Fig. 5 shows that the TAC and TAS serve as a proxy between Client and Server, and Fig. 10 further shows components such as a network interface (i.e., NIC)).
Regarding claim 7 (similarly claim 17), the Klimentiev-Hoy combination teaches the computer-implemented method of claim 1 and the system of claim 11. Hoy further teaches: wherein the communication tunnel includes an internet protocol security (IPSEC) virtual private network (VPN) tunnel (Hoy, [0091]: That is, the VPN Manager 508 issues instructions and configuration requirements to the VPN agents which the VPN agents (or other existing entities) carry out. As is known to those skilled in the art, the VPN tunnel would use a selected VPN security technology, such as the Internet Protocol security (IPsec), Secure Socket Layer/Transport Layer Security (SSL/TLS), …). The same motivation as presented in claims 1 and 11 would apply.

Regarding claim 8 (similarly claim 18), the Klimentiev-Hoy combination teaches the computer-implemented method of claim 1 and the system of claim 11. Hoy further teaches: wherein the one or more parameters include routing information, one or more keys used for authentication, and an encryption level to be implemented via the communication tunnel (Hoy, [0091]: That is, the VPN Manager 508 issues instructions and configuration requirements to the VPN agents which the VPN agents (or other existing entities) carry out... The VPN security requirements in the Machine A could include the selection of the VPN security protocol as well as the capabilities to encrypt data according to different encryption standards such as Data Encryption Standard (DES)/Triple DES (3DES) and Advanced Encryption Standard (AES) with different key sizes or multicast or group encryption standards). The same motivation as presented in claims 1 and 11 would apply.
Regarding claim 9 (similarly claim 19), the Klimentiev-Hoy combination teaches the computer-implemented method of claim 1 and the system of claim 11. Klimentiev further teaches: wherein the one or more instructions are sent from an agent implemented within the cloud service provider infrastructure directly to the agent implemented within the customer on-premise environment (e.g., [0028]: At 440, a server instance of the specified resource is created. A tunnel agent server (i.e., an agent implemented within the cloud service provider infrastructure) is also created to interface with the local system. The tunnel agent server may have connection information to support communication with the software components in the local system (e.g., the cloud controller or the tunnel agent client (i.e., the agent)) … At 450, the connection information may be forwarded to the cloud controller. Upon receipt of the connection information, cloud controller generates shared security information (e.g., a shared key) at 460. The shared security information may enable the tunnel agent client and tunnel agent server to communicate with one another in a secure manner).

Regarding claim 10, the Klimentiev-Hoy combination teaches the computer-implemented method of claim 1. Hoy further teaches: wherein the one or more instructions are sent from a control plane within the cloud service provider infrastructure via an application program interface (API) implemented within the cloud service provider infrastructure to the agent implemented within the customer on-premise environment (Hoy, [0113]: The VPN Manager would expose options to manually add, delete and reconfigure VPN agents and VPN tunnels. In one preferred embodiment, the user interface (UI) would be a UI+REST API interface. The Web based UI will use the REST API or an external application could use the REST API directly). The same motivation as presented in claim 1 would apply.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Klimentiev-Hoy as applied above to claims 1 and 11, respectively, further in view of Andrews et al. (US20240129310A1, hereinafter "Andrews").

Regarding claim 4 (similarly claim 14), the Klimentiev-Hoy combination teaches the computer-implemented method of claim 1 and the system of claim 11. While the combination of Klimentiev-Hoy does not specifically teach the following, in the same field of endeavor Andrews teaches: wherein the agent implemented within the customer on-premise environment includes hardware implemented within the customer on-premise environment at a point of manufacture (Andrews discloses systems and methods of a hybrid appliance for zero trust network access to a customer application with a secure tunnel, see [Abstract]. Also Fig. 5, and [0085]: The network device 506 may advantageously incorporate a hardware security system 514 such as a dedicated chip or circuit that stores data for authenticating the network device 506 to other devices, or otherwise securing operation of the network device 506 or verifiably asserting an identity of the network device 406. For example, Trusted Platform Module (TPM) is an international standard for a dedicated hardware cryptoprocessor that specifies an architecture, security algorithms, cryptographic primitives, root keys, authorization standards, and so forth that can be used for authentication. A TPM cryptoprocessor securely stores device-specific key material that is bound to a device at manufacture).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have employed the teachings of Andrews in the integrating of local systems with cloud computing resources of Klimentiev-Hoy by implementing a security device in hardware that stores device-specific key material bound to the device at manufacture. This would have been obvious because the person having ordinary skill in the art would have been motivated to use a hardware-based security device that advantageously incorporates a hardware security system, such as a dedicated chip or circuit, that stores data for authenticating the network device (Andrews, [Abstract], [0085]).

Citation of References

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following reference is cited but not relied upon in this Office action: Guo et al. (US20040168088A1) discloses a method and virtual private network (VPN) system for providing bandwidth-guaranteed provisioning in network-based mobile VPN services.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL M LEE, whose telephone number is (571) 272-1975. The examiner can normally be reached M-F: 8:30AM - 5:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shewaye Gelagay, can be reached at (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL M LEE/
Primary Examiner, Art Unit 2436
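
Stripped of the §103 mapping, independent claim 1 recites a three-step control-plane loop: monitor the tunnel, determine updated parameters (including an updated bandwidth), and transmit instructions to the on-premise agent. A minimal illustrative sketch of that loop follows; all class and function names here are hypothetical, not from the application or the cited references, and the claim itself is implementation-agnostic:

```python
# Illustrative sketch of the control flow recited in claim 1.
# All names are hypothetical.
from __future__ import annotations
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TunnelParams:
    bandwidth_mbps: int          # "a bandwidth to be supported by the communication tunnel"
    encryption: str = "AES-256"  # other parameters travel with the bandwidth

def determine_update(current: TunnelParams, observed_mbps: int) -> TunnelParams | None:
    """Monitoring/determining steps: return updated parameters, including an
    updated bandwidth, when observed demand exceeds the provisioned rate."""
    if observed_mbps > current.bandwidth_mbps:
        return replace(current, bandwidth_mbps=observed_mbps)
    return None  # no update needed

def instruction_for_agent(updated: TunnelParams) -> dict:
    """Transmitting step: the instruction the cloud-side control plane sends
    to the on-premise agent to re-implement the tunnel endpoint."""
    return {"action": "update_tunnel", "bandwidth_mbps": updated.bandwidth_mbps}

current = TunnelParams(bandwidth_mbps=100)
updated = determine_update(current, observed_mbps=250)
if updated is not None:
    instruction = instruction_for_agent(updated)  # delivered to the on-premise agent
```

The sketch is deliberately one-directional (cloud control plane to on-premise agent), since that asymmetry is what the amended limitations emphasize over Hoy's agent-side bandwidth logging.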

Prosecution Timeline

Feb 13, 2023: Application Filed
Dec 17, 2024: Non-Final Rejection (§103)
Mar 24, 2025: Response Filed
May 12, 2025: Final Rejection (§103)
Nov 13, 2025: Request for Continued Examination
Nov 22, 2025: Response after Non-Final Action
Jan 14, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12596786: ANOMALOUS EVENT AGGREGATION FOR ANALYSIS AND SYSTEM RESPONSE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579301: Data Plane Management Systems and Methods (granted Mar 17, 2026; 2y 5m to grant)
Patent 12580927: DETECTING AND PROTECTING CLAIMABLE NON-EXISTENT DOMAINS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579279: System and Method for Summarization of Complex Cybersecurity Behavioral Ontological Graph (granted Mar 17, 2026; 2y 5m to grant)
Patent 12580938: CONDITIONAL HYPOTHESIS GENERATION FOR ENTERPRISE PROCESS TREES (granted Mar 17, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 99% (+44.1%)
Median Time to Grant: 3y 0m
PTA Risk: High

Based on 259 resolved cases by this examiner. Grant probability derived from career allow rate.
