Prosecution Insights
Last updated: April 19, 2026
Application No. 18/532,325

INFORMATION PROCESSING DEVICE

Status: Final Rejection (§103)
Filed: Dec 07, 2023
Examiner: DUFFIELD, JEREMY S
Art Unit: 2498
Tech Center: 2400 — Computer Networks
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 2 (Final)

Grant Probability: 49% (Moderate)
OA Rounds: 3-4
To Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 49% (grants 49% of resolved cases: 213 granted / 438 resolved; -9.4% vs TC avg)
Interview Lift: +53.1% (strong; resolved cases with interview vs. without)
Avg Prosecution: 3y 11m (typical timeline)
Total Applications: 465 across all art units (27 currently pending)

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 59.9% (+19.9% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Tech Center averages are estimates; based on career data from 438 resolved cases.
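As a quick sanity check, the headline figures above can be reproduced from the raw counts shown on the card; the rounding convention and the derivation of the implied Tech Center average are assumptions, not part of the dashboard's stated methodology:

```python
# Reproduce the dashboard's career allow rate from the raw counts.
granted = 213
resolved = 438

allow_rate = granted / resolved * 100   # 48.63...%, displayed as 49%
print(round(allow_rate))

# "-9.4% vs TC avg" implies a Tech Center average of roughly:
tc_avg = allow_rate + 9.4               # ~58.0%
print(round(tc_avg, 1))
```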

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application includes a foreign priority claim to JP2023-021852, filed 15 February 2023. The priority claim complies with all applicable rules and regulations. Therefore, the claims will be examined using an effective filing date of 15 February 2023.

Response to Arguments

Applicant’s arguments, see pages 9-10, filed 19 November 2025, with respect to the rejection of claim 1 under 35 U.S.C. 103 have been fully considered and are persuasive in light of the new claim amendments. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Gonuguntla et al. (US 2022/0357988 A1), Xu et al. (US 2013/0262556 A1), Li et al. (US 2020/0409754 A1), and Kanna et al. (US 2012/0054768 A1). See the 35 U.S.C. 103 section below for a detailed analysis.

Claim Objections

Claims 1, 3, 4, and 6 are objected to because of the following informalities:

Regarding claim 1, lines 20-21—“a plurality of secure computation methods”: it is unclear as to whether “a plurality of secure computation methods” is referring to “a plurality of secure computation methods” of lines 5-6 or is different. For examination purposes, “a plurality of secure computation methods” of lines 5-6 and 20-21 will be interpreted to be the same. In order to overcome this objection, lines 20-21 may be amended to state --the plurality of secure computation methods--, for example.

Regarding claim 3, lines 11-12—“second wait time othe first wait time” may be amended to state --second wait time and the first wait time--, for example, in order to provide the missing word and correct the misspelling.

Regarding claim 4, lines 11-12—“operatesexecutes the” may be amended to state --executes the--, for example, in order to correct the grammar issue.
Regarding claim 6, line 2—“the control unit configured to” lacks proper antecedent basis for the claim. For examination purposes, “the control unit configured to” of line 2 will be interpreted to be “the executable instructions further cause the processor to” in order to match the amended claim language of claim 1. In order to overcome this objection, line 2 may be amended to state --the executable instructions further cause the processor to--, for example.

Regarding claim 6, line 5—“one more nodes” may be amended to state --one or more nodes--, for example, in order to provide the missing word.

Regarding claim 6, lines 7-8—“a first wait time”: it is unclear as to whether “a first wait time” is referring to “a first wait time” of claim 1 or is different. For examination purposes, “a first wait time” of lines 7-8 and claim 1 will be interpreted to be the same. In order to overcome this objection, lines 7-8 may be amended to state --the first wait time--, for example.

Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 3-6 are rejected under 35 U.S.C. 103 as being unpatentable over Gonuguntla et al. (US 2022/0357988 A1) in view of Xu et al. (US 2013/0262556 A1), in view of Li et al. (US 2020/0409754 A1), and further in view of Kanna et al. (US 2012/0054768 A1).
Regarding claim 1, Gonuguntla teaches an information processing device, e.g., resource controller 150 (Fig. 1, el. 150), comprising a processor, e.g., processor(s) 248 (Fig. 5B, el. 248), wherein computing device 500 may be used for the resource controller 150 (Para. 72); and a memory, e.g., non-volatile memory 252; volatile memory 270 (Fig. 5B, el. 252, 270), storing executable instructions that cause the processor, e.g., the non-volatile memory 252 may store an operating system 264, one or more applications 266, and data 268 such that computer instructions of the operating system 264 and/or applications 266 are executed by the processor(s) 248 out of the volatile memory 270 (Para. 73), to:

…; assign a transform of a first secure computation method of the plurality of secure computation methods to a first node from among a plurality of nodes, e.g., a first chipset 115a and a second chipset 115b (Fig. 1, el. 110, 115a, 115b), allocatable for distributed data processing, wherein the first node has a first node attribute corresponding to the first secure computation method, e.g., the encryption and decryption of network traffic between the client device 130 and the one or more servers 120 may be offloaded to the application delivery controller 110, wherein the application delivery controller 110 may include one or more hardware resources, such as one or more cryptographic accelerator chipsets dedicated to the performance of cryptographic operations offloaded to the application delivery controller 110 including a first chipset 115a and a second chipset 115b (Para. 40); the first chipset 115a and the second chipset 115b may be configured to perform a variety of operations, wherein the first chipset 115a and the second chipset 115b may be dedicated to the performance of cryptographic operations such as Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC), Data Encryption Standard (DES), Secure Hash Algorithms (SHA), Diffie-Hellman (DH), and/or the like—attributes-- (Para. 43);

…, and the first node further has a second node attribute corresponding to a second secure computation method that is different from the first secure computation method, e.g., the first chipset 115a and the second chipset 115b may be configured to perform a variety of operations, wherein the first chipset 115a and the second chipset 115b may be dedicated to the performance of cryptographic operations such as Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC), Data Encryption Standard (DES), Secure Hash Algorithms (SHA), Diffie-Hellman (DH), and/or the like—attributes-- (Para. 43), …, and the first node operates to execute a transform of the second secure computation method before the transform of the first secure computation method is assigned to the first node, e.g., the first chipset 115a and the second chipset 115b may be configured to perform a variety of operations, wherein the first chipset 115a and the second chipset 115b may be dedicated to the performance of cryptographic operations such as Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC), Data Encryption Standard (DES), Secure Hash Algorithms (SHA), Diffie-Hellman (DH), and/or the like (Para. 43).
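As a reading aid only (not part of the record), the attribute mapping the rejection relies on — a node advertising the cryptographic operations it supports, so that a transform may be assigned only to a node with a matching attribute — can be sketched roughly as follows; all names and values are hypothetical:

```python
# Hypothetical sketch: a node (cf. Gonuguntla's accelerator chipsets)
# advertises the secure computation methods it can execute, and a
# transform may only be assigned to a node with a matching attribute.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    attributes: set[str] = field(default_factory=set)  # supported methods

def can_assign(node: Node, method: str) -> bool:
    # A method's transform is assignable only if the node has the
    # corresponding attribute.
    return method in node.attributes

chipset_a = Node("115a", {"RSA", "AES", "ECC"})
print(can_assign(chipset_a, "RSA"))   # True
print(can_assign(chipset_a, "SHA"))   # False
```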
Gonuguntla does not clearly teach to: determine a wait time of each secure computation method of a plurality of secure computation methods executed in distributed data processing, the wait time being a time period from when data is ready to be processed until processing of the data starts; sort the plurality of secure computation methods based on a length of the determined wait time of each secure computation method; the first secure computation method having a first wait time that is a longest wait time from among the plurality of secure computation methods, the wait time of each secure computation method of a plurality of secure computation methods is a sum of a plurality of transforms of each secure computation method of the plurality of secure computation methods, and wherein the second secure computation method has a second wait time that is shorter than the first wait time.

Xu teaches to: determine a wait time of each secure computation method of a plurality of secure computation methods executed in distributed data processing, …, e.g., a policy corresponding to a head node 112 may be used to select one or more of the plurality of job requests 106 for execution, wherein the head node 112 may select at least one job request having a highest priority or a longest waiting time, and the head node 112 converts the selected ones of the plurality of job requests 106 into corresponding ones of the plurality of resource allocation requests 108 and submits each resource allocation request to a job queue 116 within the cloud computing provider 104 (Fig. 1, el. 112; Para. 23); the computing resource allocations may refer to capability (e.g., fault-tolerance, security)—secure-- (Para. 31);

sort the plurality of secure computation methods based on a length of the determined wait time of each secure computation method, e.g., a policy corresponding to a head node 112 may be used to select one or more of the plurality of job requests 106 for execution, wherein the head node 112 may select at least one job request having a highest priority or a longest waiting time—sorting--, and the head node 112 converts the selected ones of the plurality of job requests 106 into corresponding ones of the plurality of resource allocation requests 108 and submits each resource allocation request to a job queue 116 within the cloud computing provider 104 (Figs. 1, 2, el. 112; Para. 23);

assign a transform of a first secure computation method of the plurality of secure computation methods to a first node from among a plurality of nodes, e.g., compute nodes 120; worker nodes 122 (Fig. 1, el. 120, 122), allocatable for distributed data processing, wherein the first node has a first node attribute corresponding to the first secure computation method, e.g., the head node 112 may distribute the first set of tasks amongst one or more of the compute nodes 120 and the second set of tasks amongst one or more of the worker nodes 122, wherein the head node 112 may select a particular node based on resource availability as dictated by the policy, wherein each task generally refers to a discrete unit of a job request that comprises one or more data operations/computations (Para. 24); the administration mechanism 202 may examine a (computer cluster) policy 208 to select one or more job requests to be executed using cloud computing resources, and based on the policy 208, the administration mechanism 202 may select one or more specific cloud computing providers to request resource allocations and assign certain ones of the job requests for execution, wherein the administration mechanism 202 may match a particular job request with a cloud computing provider capable of efficient/expeditious execution, such as the cloud computing provider having a set of resources—attributes-- that meet or surpass the capacities indicated by the job specification data 204 (Fig. 2, el. 202; Para. 34),

the first secure computation method having a first wait time that is a longest wait time from among the plurality of secure computation methods, e.g., a policy corresponding to a head node 112 may be used to select one or more of the plurality of job requests 106 for execution, wherein the head node 112 may select at least one job request having a highest priority or a longest waiting time (Para. 23), …, and the first node further has a second node attribute corresponding to a second secure computation method that is different from the first secure computation method, e.g., the administration mechanism 202 may examine a (computer cluster) policy 208 to select one or more job requests to be executed using cloud computing resources, and based on the policy 208, the administration mechanism 202 may select one or more specific cloud computing providers to request resource allocations and assign certain ones of the job requests for execution, wherein the administration mechanism 202 may match a particular job request with a cloud computing provider capable of efficient/expeditious execution, such as the cloud computing provider having a set of resources—attributes-- that meet or surpass the capacities indicated by the job specification data 204 (Fig. 2, el. 202; Para. 34),

wherein the second secure computation method has a second wait time that is shorter than the first wait time, e.g., a policy corresponding to a head node 112 may be used to select one or more of the plurality of job requests 106 for execution, wherein the head node 112 may select at least one job request having a highest priority or a longest waiting time (Para. 23), and the first node operates to execute a transform of the second secure computation method before the transform of the first secure computation method is assigned to the first node, e.g., a policy corresponding to a head node 112 may be used to select one or more of the plurality of job requests 106 for execution, wherein the head node 112 may select at least one job request having a highest priority or a longest waiting time (Para. 23); the administration mechanism 202 may examine a (computer cluster) policy 208 to select one or more job requests to be executed using cloud computing resources, and based on the policy 208, the administration mechanism 202 may select one or more specific cloud computing providers to request resource allocations and assign certain ones of the job requests for execution, wherein the administration mechanism 202 may match a particular job request with a cloud computing provider capable of efficient/expeditious execution, such as the cloud computing provider having a set of resources—attributes-- that meet or surpass the capacities indicated by the job specification data 204 (Fig. 2, el. 202; Para. 34).
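The selection logic Xu is cited for — pick the pending work with the longest waiting time first — can be illustrated with a minimal sketch. This is an editorial reading aid with hypothetical names and values, not Xu's actual implementation or the claimed method:

```python
# Hypothetical sketch of longest-wait-first selection (cf. Xu's head
# node 112 selecting the job request with the longest waiting time).
methods = {           # method -> accumulated wait time (arbitrary units)
    "method_A": 120,
    "method_B": 45,
    "method_C": 80,
}

# Sort methods by wait time, longest first; the front of the list is
# the method whose transform is assigned first.
by_wait = sorted(methods, key=methods.get, reverse=True)
print(by_wait)        # ['method_A', 'method_C', 'method_B']
```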
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gonuguntla to include: determine a wait time of each secure computation method of a plurality of secure computation methods executed in distributed data processing; sort the plurality of secure computation methods based on a length of the determined wait time of each secure computation method; the first secure computation method having a first wait time that is a longest wait time from among the plurality of secure computation methods, and wherein the second secure computation method has a second wait time that is shorter than the first wait time, using the known methods of selecting, by the head node, a job that has the longest wait time, and assigning a task from the selected job to a node for execution, as taught by Xu, in combination with the crypto-accelerator system of Gonuguntla, for the purpose of satisfying actual computational demands of the client computers (Xu-Para. 5), while also improving job request performance by accurately determining the appropriate node configuration (Xu-Para. 6).

Gonuguntla in view of Xu does not clearly teach the wait time being a time period from when data is ready to be processed until processing of the data starts; and the wait time of each secure computation method of a plurality of secure computation methods is a sum of a plurality of transforms of each secure computation method of the plurality of secure computation methods.
Li teaches the wait time being a time period from when data is ready to be processed…; and the wait time of each secure computation method of a plurality of secure computation methods is a sum of a plurality of transforms of each secure computation method of the plurality of secure computation methods, e.g., total queue pending time for each job 362 corresponds to a total amount of time that all program tasks associated with a particular job spent waiting in their respective task queues (Fig. 3, el. 362; Para. 46); examples of a job may include logging into the server computer system, sending information to a different user or entity, performing a risk analysis of a transaction, completing a financial transaction, and the like—secure computation methods-- (Para. 13).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gonuguntla in view of Xu to include the wait time of each secure computation method of a plurality of secure computation methods is a sum of a plurality of transforms of each secure computation method of the plurality of secure computation methods, using the known methods of determining a total queue pending time for each job that corresponds to a total amount of time that all program tasks associated with a particular job spent waiting in their respective task queues, as taught by Li, in combination with the crypto-accelerator system of Gonuguntla in view of Xu, for the purpose of providing an indication of how efficiently the server computer system is performing each job (Li-Para. 46).

Gonuguntla in view of Xu in view of Li does not explicitly teach the wait time being a time period from when data is ready to be processed until processing of the data starts.
Kanna teaches the wait time being a time period from when data is ready to be processed until processing of the data starts, e.g., waiting time is a time from a request of execution to each business application 203 by the synthesized workflow group specified in Step S372 to a start of the execution (Para. 254).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gonuguntla in view of Xu in view of Li to include the wait time being a time period from when data is ready to be processed until processing of the data starts, using the known methods of determining a waiting time that is a time from the request of execution to a start of the execution, as taught by Kanna, in combination with the crypto-accelerator system of Gonuguntla in view of Xu in view of Li, for the purpose of providing a more accurate representation of a job pending time, thereby creating a more efficient job execution system.

Regarding claim 3, Gonuguntla in view of Xu in view of Li in view of Kanna teaches the information processing device according to claim 1.
Gonuguntla further teaches wherein the executable instructions further cause the processor to: assign the transform of the first secure computation method to a second node when no node falls under the first node, wherein the second node has the first node attribute and has a third node attribute corresponding to a third secure computation method and operates to execute a transform of the third secure computation method, e.g., the encryption and decryption of network traffic between the client device 130 and the one or more servers 120 may be offloaded to the application delivery controller 110, wherein the application delivery controller 110 may include one or more hardware resources, such as one or more cryptographic accelerator chipsets dedicated to the performance of cryptographic operations offloaded to the application delivery controller 110 including a first chipset 115a and a second chipset 115b (Fig. 1, el. 110, 115a, 115b; Para. 40); each of the first chipset 115a and the second chipset 115b may be capable of performing a different quantity of different types of operations per unit of time such as a first quantity of a first type of operations per second and a second quantity of a second type of operations per second, and the first chipset 115 and the second chipset 115b may be different types of accelerator chipsets capable of performing different quantities of the same type of operations per unit of time, wherein the first chipset 115a and the second chipset 115b may be configured to perform a variety of operations, wherein the first chipset 115a and the second chipset 115b may be dedicated to the performance of cryptographic operations such as Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC), Data Encryption Standard (DES), Secure Hash Algorithms (SHA), Diffie-Hellman (DH), and/or the like—attributes-- (Para. 43); at 408, the resource controller 150 may determine, based at least on the utilization of the hardware resource, an allocation of hardware resources, wherein the resource controller 150 may allocate more accelerator chipsets when hardware resource utilization at the application delivery controller 110 exceeds a threshold value (e.g., 90%) or remains excessively high for more than a threshold quantity of time (e.g., 90% for more than 10 minutes) (Fig. 4, el. 408; Para. 62); FIG. 1 includes a first chipset 115a and a second chipset 115b but it should be appreciated that the application delivery controller 110 may include a different quantity of chipsets than shown (Para. 40), and ….

Gonuguntla does not clearly teach the third secure computation method has a third wait time that is longer than both the second wait time othe first wait time.

Xu further teaches the third secure computation method has a third wait time that is longer than both the second wait time othe first wait time, e.g., a policy corresponding to a head node 112 may be used to select one or more of the plurality of job requests 106 for execution, wherein the head node 112 may select at least one job request having a highest priority or a longest waiting time (Fig. 1, el. 112; Para. 23); the administration mechanism 202 may examine a (computer cluster) policy 208 to select one or more job requests to be executed using cloud computing resources, and based on the policy 208, the administration mechanism 202 may select one or more specific cloud computing providers to request resource allocations and assign certain ones of the job requests for execution, wherein the administration mechanism 202 may match a particular job request with a cloud computing provider capable of efficient/expeditious execution, such as the cloud computing provider having a set of resources that meet or surpass the capacities indicated by the job specification data 204 (Fig. 2, el. 202; Para. 34); see Fig. 1, el. 106, which shows three different jobs (Fig. 1, el. 106).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gonuguntla to include the third secure computation method has a third wait time that is longer than both the second wait time othe first wait time, using the known methods of selecting, by the head node, a job that has the longest wait time, and assigning a task from the selected job to a node for execution, as taught by Xu, in combination with the crypto-accelerator system of Gonuguntla, using the same motivation as in claim 1.

Regarding claim 4, Gonuguntla in view of Xu in view of Li in view of Kanna teaches the information processing device according to claim 1, wherein the executable instructions further cause the processor to: compare a first throughput and a second throughput, and the second throughput is determined to be lower than the first throughput; and restore the first node to a state before the transform of the first secure computation method is assigned when the comparison of the first throughput and the second throughput indicates the second throughput is lower than the first throughput, wherein the first throughput is a throughput when the first node operates executes the transform of the second secure computation method, and the second throughput is a throughput when the first node executes the transform of the first secure computation method, e.g., each of the first chipset 115a and the second chipset 115b may be capable of performing a different quantity of different types of operations per unit of time such as a first quantity of a first type of operations per second and a second quantity of a second type of operations per second, and the first chipset 115 and the second chipset 115b may be different types of accelerator chipsets capable of performing different quantities of the same type of operations per unit of time (Gonuguntla-Para.
43); to determine the hardware resource utilization of the application delivery controller 110, the resource controller 150 may apply the matrix M to determine the utilization of each of the first chipset 115a and the second chipset 115b based on the respective traffic pattern at each of the first chipset 115a and the second chipset 115b, wherein utilization at the first chipset 115a for any one point in time t may be determined based on a quantity of each type of operation offloaded to the first chipset 115a and a respective weight of these operations (Gonuguntla-Para. 48); the resource controller 150 may allocate less accelerator chipsets when hardware resource utilization falls below a threshold value (e.g., 50%) or remains low for more than a threshold quantity of time (e.g., 50% for more than 10 minutes) (Gonuguntla-Para. 57).

Regarding claim 5, Gonuguntla in view of Xu in view of Li in view of Kanna teaches the information processing device according to claim 3, wherein the executable instructions further cause the processor to: compare a first throughput and a second throughput, and the second throughput is determined to be lower than the first throughput; and restore the second node to a state before the transform of the first secure computation method is assigned when the comparison of the first throughput and the second throughput indicates the second throughput is lower than the first throughput, wherein the first throughput is a throughput when the second node executes the transform of the second secure computation method, and the second throughput is a throughput when the first node executes the transform of the first secure computation method, e.g., each of the first chipset 115a and the second chipset 115b may be capable of performing a different quantity of different types of operations per unit of time such as a first quantity of a first type of operations per second and a second quantity of a second type of operations per second, and the first chipset 115 and the second chipset 115b may be different types of accelerator chipsets capable of performing different quantities of the same type of operations per unit of time (Gonuguntla-Para. 43); to determine the hardware resource utilization of the application delivery controller 110, the resource controller 150 may apply the matrix M to determine the utilization of each of the first chipset 115a and the second chipset 115b based on the respective traffic pattern at each of the first chipset 115a and the second chipset 115b, wherein utilization at the first chipset 115a for any one point in time t may be determined based on a quantity of each type of operation offloaded to the first chipset 115a and a respective weight of these operations (Gonuguntla-Para. 48); the resource controller 150 may allocate less accelerator chipsets when hardware resource utilization falls below a threshold value (e.g., 50%) or remains low for more than a threshold quantity of time (e.g., 50% for more than 10 minutes) (Gonuguntla-Para. 57).

Regarding claim 6, Gonuguntla in view of Xu in view of Li in view of Kanna teaches the information processing device according to claim 1.

Gonuguntla further teaches wherein the control unit is configured to assign one more nodes of the plurality of nodes, in addition to the first node, to execute the transform of the first secure computation method, e.g., the encryption and decryption of network traffic between the client device 130 and the one or more servers 120 may be offloaded to the application delivery controller 110, wherein the application delivery controller 110 may include one or more hardware resources, such as one or more cryptographic accelerator chipsets dedicated to the performance of cryptographic operations offloaded to the application delivery controller 110 including a first chipset 115a and a second chipset 115b (Fig. 1, el. 110, 115a, 115b; Para.
40); at 408, the resource controller 150 may determine, based at least on the utilization of the hardware resource, an allocation of hardware resources, wherein the resource controller 150 may allocate more accelerator chipsets when hardware resource utilization at the application delivery controller 110 exceeds a threshold value (e.g., 90%) or remains excessively high for more than a threshold quantity of time (e.g., 90% for more than 10 minutes) (Fig. 4, el. 408; Para. 62), when a utilization time of the first secure computation method is longer than a first threshold, e.g., to determine the hardware resource utilization of the application delivery controller 110, the resource controller 150 may apply the matrix M to determine the utilization of each of the first chipset 115a and the second chipset 115b based on the respective traffic pattern at each of the first chipset 115a and the second chipset 115b, wherein utilization at the first chipset 115a for any one point in time t may be determined based on a quantity of each type of operation offloaded to the first chipset 115a and a respective weight of these operations (Para. 48); wherein the resource controller 150 may allocate more accelerator chipsets when hardware resource utilization at the application delivery controller 110 exceeds a threshold value (e.g., 90%) or remains excessively high for more than a threshold quantity of time (e.g., 90% for more than 10 minutes) (Fig. 4, el. 408; Para. 
62), and stop assigning the transform of the second method to the first node operating to execute the transform of the second secure computation method, e.g., the encryption and decryption of network traffic between the client device 130 and the one or more servers 120 may be offloaded to the application delivery controller 110, wherein the application delivery controller 110 may include one or more hardware resources, such as one or more cryptographic accelerator chipsets dedicated to the performance of cryptographic operations offloaded to the application delivery controller 110 including a first chipset 115a and a second chipset 115b (Fig. 1, el. 110, 115a, 115b; Para. 40); the resource controller 150 may allocate less accelerator chipsets when hardware resource utilization falls below a threshold value (e.g., 50%) or remains low for more than a threshold quantity of time (e.g., 50% for more than 10 minutes) (Para. 57), when the second utilization time is smaller than the first threshold, e.g., to determine the hardware resource utilization of the application delivery controller 110, the resource controller 150 may apply the matrix M to determine the utilization of each of the first chipset 115a and the second chipset 115b based on the respective traffic pattern at each of the first chipset 115a and the second chipset 115b, wherein utilization at the first chipset 115a for any one point in time t may be determined based on a quantity of each type of operation offloaded to the first chipset 115a and a respective weight of these operations (Para. 48); see Table 1 which indicates a weight for each respective cryptographic operation for each chipset (Para. 47); the resource controller 150 may allocate less accelerator chipsets when hardware resource utilization falls below a threshold value (e.g., 50%) or remains low for more than a threshold quantity of time (e.g., 50% for more than 10 minutes) (Para. 
57); wherein the resource controller 150 may allocate more accelerator chipsets when hardware resource utilization at the application delivery controller 110 exceeds a threshold value (e.g., 90%) or remains excessively high for more than a threshold quantity of time (e.g., 90% for more than 10 minutes) (Fig. 4, el. 408; Para. 62). Gonuguntla does not clearly teach when a wait time of the first secure computation method is longer than a first threshold; and when the second wait time is smaller than the first threshold. Xu further teaches when a wait time of the first secure computation method is longer than a first threshold; and when the second wait time is smaller than the first threshold, e.g., the administration mechanism 202 selects each pending job request having a certain priority, wherein the administration mechanism 202 may select each pending job request that does or, alternatively, does not require a certain computing resource, wherein the administration mechanism 202 may identify each pending job request having an execution time (e.g., wait time) that is projected to be equal to or exceed a pre-defined total execution time period (Para. 43). 
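The utilization-driven allocation mechanism the examiner maps from Gonuguntla (Paras. 47-48, 57, 62) can be sketched as follows. This is a minimal illustration only: the function names, operation weights, and sample counts below are hypothetical stand-ins (the weights play the role of Gonuguntla's Table 1), not the reference's actual values.

```python
# Sketch: weighted chipset utilization plus threshold-based (de)allocation.
# All identifiers and numeric values are invented for illustration.

def chipset_utilization(op_counts, weights):
    """Utilization at one point in time: sum, over each type of operation
    offloaded to the chipset, of (quantity of that operation) x (its weight)."""
    return sum(count * weights.get(op, 0.0) for op, count in op_counts.items())

def adjust_allocation(samples, high=0.90, low=0.50, sustain=10):
    """Decide whether to change the accelerator-chipset allocation.

    samples: per-minute utilization history, most recent last.
    Returns +1 (allocate another chipset) when utilization stayed above
    `high` for `sustain` consecutive samples, -1 (release one) when it
    stayed below `low` that long, and 0 otherwise.
    """
    recent = samples[-sustain:]
    if len(recent) < sustain:
        return 0            # not enough history to act on yet
    if all(s > high for s in recent):
        return +1           # e.g., above 90% for more than 10 minutes
    if all(s < low for s in recent):
        return -1           # e.g., below 50% for more than 10 minutes
    return 0
```

A wait-time trigger of the kind Xu supplies (act when a job's projected wait meets or exceeds a pre-defined period) would slot in as an additional condition alongside the utilization check.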
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gonuguntla to include: wherein the control unit is configured to assign one or more nodes of the plurality of nodes, in addition to the first node, to execute the transform of the first secure computation method when a wait time of the first secure computation method is longer than a first threshold; and stop assigning the transform of the second method to the first node operating to execute the transform of the second secure computation method when the second wait time is smaller than the first threshold, using the known methods of selecting, by the head node, a job that has a wait time equal to or exceeding a pre-defined time period, as taught by Xu, in combination with the crypto-accelerator system of Gonuguntla, using the same motivation as in claim 1.

Relevant Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Zangaro et al. (US 2014/0325524 A1)—Zangaro discloses that total waiting time may include the time period between when event receiver 200 receives an event and when the job(s) related to the event are fully processed such that a response can be communicated to the client. If the event caused multiple jobs to be created, the total waiting time may include the time to process all related jobs. Total waiting time may include the time a job spends in the system-level queue 202 plus the time required for handling by a processing node. Handling time by a processing node may include time spent in a node-level queue and time required to process the job, e.g., by a CPU of the processing node (Para. 42).

Blythe et al. (US 2004/0139433 A1)—Blythe discloses determining which inbound requests should be assigned to which pools (the requests will enter a wait queue for that pool, if necessary); the number and/or size of thread pools in use may be programmatically tuned as well, and the process tracks requests as they execute, determines the average execution time and wait time per type of request, and dynamically adjusts the number of thread pools and/or the number of threads in the pools (Para. 31).

Virtuoso et al. (US 11,128,701 B1)—Virtuoso discloses that the service causes the computing nodes to be removed from further executing tasks as part of processing their queries in a first group of nodes in the provider network while other computing nodes in the first group continue to execute tasks as part of processing the queries. The service then adds the first computing node into a second group of nodes in the provider network to execute tasks as part of processing other queries in the provider network (Abstract).

Yao et al. (US 2023/0185624 A1)—Yao discloses determining a first mapping between a first set of data parameters and first computing units of a computing network; selecting, based on the first mapping and on first data having a first workload associated therewith, one or more of the first computing units to execute the first workload, and sending for execution the first workload to the one or more of the first computing units; determining a second mapping based on a change in computing units from the first computing units to second computing units, the second mapping between a second set of data parameters and the second computing units; and selecting, based on the second mapping and on second data having a second workload associated therewith, one or more of the second computing units to execute the second workload (Abstract).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY DUFFIELD whose telephone number is (571)270-1643. The examiner can normally be reached Monday - Friday, 7:00 AM - 3:00 PM (ET). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yin-Chen Shaw can be reached at (571) 272-8878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 15 January 2026 /Jeremy S Duffield/Primary Examiner, Art Unit 2498

Prosecution Timeline

Dec 07, 2023
Application Filed
Aug 26, 2025
Non-Final Rejection — §103
Nov 19, 2025
Response Filed
Jan 15, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598067
Method, Device, and System for Updating Anchor Key in a Communication Network for Encrypted Communication with Service Applications
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591642
SYSTEM FOR STEGANALYSIS DETECTION OF METADATA IN A VIDEO STREAM FOR PROVIDING REAL-TIME DATA
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579320
SPLIT COUNTERS WITH DYNAMIC EPOCH TRACKING FOR CRYPTOGRAPHIC PROTECTION OF SECURE DATA
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572685
CONTEXT-BASED PATTERN MATCHING FOR SENSITIVE DATA DETECTION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12554872
SYSTEM AND METHOD FOR NOTIFYING USERS ABOUT PUBLICLY AVAILABLE DATA
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
49%
Grant Probability
99%
With Interview (+53.1%)
3y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 438 resolved cases by this examiner. Grant probability derived from career allow rate.
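The headline figures above follow from simple arithmetic on the examiner's career stats. One assumption in the sketch below: the "+53.1%" interview lift is read as a percentage-point difference between the with-interview and without-interview allow rates, so the without-interview rate is inferred, not taken from the page.

```python
# Sanity-check of the dashboard figures (arithmetic only).
granted, resolved = 213, 438            # examiner's career stats, from the card
allow_rate = granted / resolved         # career allow rate
print(f"career allow rate: {allow_rate:.1%}")       # ~48.6%, displayed as 49%

with_interview = 0.99                   # allow rate with interview (shown)
lift = 0.531                            # "+53.1%" lift, assumed to be percentage points
without_interview = with_interview - lift           # implied, not shown on the page
print(f"implied without-interview rate: {without_interview:.1%}")
```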
