Prosecution Insights
Last updated: April 19, 2026
Application No. 17/442,347

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM

Final Rejection — §101, §103, §112
Filed: Sep 23, 2021
Examiner: ZECHER, CORDELIA P K
Art Unit: 2100
Tech Center: 2100 — Computer Architecture & Software
Assignee: National Institute of Advanced Industrial Science and Technology
OA Round: 2 (Final)

Grant Probability: 50% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 8m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 50% of resolved cases (253 granted / 509 resolved; -5.3% vs TC avg)
Interview Lift: strong, +25.8% for resolved cases with an interview
Avg Prosecution: 3y 8m typical timeline; 287 applications currently pending
Total Applications: 796 career history, across all art units
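The headline figures above are simple ratios of the career counts. A minimal sketch of the arithmetic (illustrative only; it assumes the "vs TC avg" figure is a percentage-point difference, which would imply a Tech Center average of about 55%):

```python
granted, resolved = 253, 509

allow_rate = granted / resolved   # 0.497..., displayed as 50%
tc_average = 0.55                 # assumed; implied by the -5.3 point delta

print(f"Career allow rate: {allow_rate:.1%}")                  # 49.7%
print(f"Delta vs TC average: {allow_rate - tc_average:+.1%}")  # -5.3%
```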

Statute-Specific Performance

§101: 19.0% (-21.0% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 509 resolved cases.
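Each row follows the same pattern: the examiner's per-statute rate minus the Tech Center average estimate. Notably, every displayed delta is consistent with a single estimate of 40.0%, as this small check shows (illustrative; the 40% figure is inferred from the rows above, not taken from the source):

```python
# Inferred: every row satisfies rate - delta = 40.0%
tc_avg_estimate = 0.400

examiner_rates = {"§101": 0.190, "§103": 0.468, "§102": 0.131, "§112": 0.160}
for statute, rate in examiner_rates.items():
    delta = rate - tc_avg_estimate
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```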

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Amendment

Applicant's submission filed 2025-08-25 has been entered. Applicant's amendments to the Specification and Claims have overcome each and every objection, 112(f) interpretation, and 112(b) rejection that was previously set forth in the most recent Office action. However, new rejections under 112(a) are applied.

The claim status is as follows: Claims 1-7 are pending in the application. Claims 1-5 are amended. Claims 6-7 are new.

Response to Arguments

Applicant's arguments filed in response to the rejections under 35 USC 101 have been fully considered but they are not persuasive. Applicant argues on Remarks page 8 that "claims 1 and 4-5 are similar to Example 47", but makes no further argument beyond this conclusory statement. Examiner respectfully disagrees, and points out that in Example 47, Claim 1 is eligible because it does not recite any abstract ideas. However, Claims 1 and 4-5 of the instant application do recite abstract ideas (i.e., "calculates"), and are therefore analogous to Example 47 Claim 2, which is shown in the example to be ineligible because it recites an abstract idea ("detecting"). Therefore, the rejections under 35 USC 101 are maintained; Examiner notes that the amendments merely recite generic computer hardware on which to perform the recited abstract ideas.

Applicant's arguments filed in response to the rejections under 35 USC 102 have been fully considered, but are moot because Applicant's amendments to the claims have necessitated a change in the art applied.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-7 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1, 4, and 5 recite the limitations "a plurality of neurons organized in an array, wherein each neuron includes a register, a microprocessor, and at least one input; and a plurality of synaptic circuits, each synaptic circuit including a memory for storing a weight, wherein each neuron is connected to at least one other neuron via one of the plurality of synaptic circuits". There is no mention whatsoever in the Specification of any of the following terms: "array", "register", "microprocessor", "synaptic circuit". The only mention of anything resembling a "circuit" or "microprocessor" in the Specification is in [0018]: "For example, the information processing device 10 may execute the processing of the piecewise linear network 20 by hardware, such as by configuring the piecewise linear network 20 using an ASIC (Application Specific Integrated Circuit)." In summary, there is nothing in the Specification regarding hardware implementation details other than the generic recitation of an ASIC in Specification [0018]. Besides an ASIC, all other claimed hardware implementation details fail to meet the written description requirement, and are therefore rejected as new subject matter.

Claim 1 recites the limitation "an output node that stores, in a recording medium, an output value". There is no support for this in the Specification, which only states that the output node "outputs an output value" (Spec [0113]), but never that the output node "stores, in a recording medium, an output value." The only connection between the "output node" and memory is in Spec [0121], which states: "the output node 303 is stored in the auxiliary storage device 730 in the form of a program." This states that the output node itself is stored as a program, not that the output node stores the "output value" in a recording medium.

Claims 2-3 and 6-7 are rejected because they inherit the deficiencies of their parent claims.

Claim Rejections - 35 USC § 101

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-3 and 6-7 are directed to an ASIC, Claim 4 is directed to a method, and Claim 5 is directed to a non-transitory recording medium; therefore all the claims are directed to one of the four statutory categories of patent eligible subject matter.
Step 2A Prong 1: Claims 1, 4, and 5 recite:
- "calculating a plurality of linear combination node values and first weight coefficients in which input values are linearly combined"; this is a mathematical calculation
- "calculating, with respect to the linear combination node value and second weight coefficients different from the first weight coefficients, a selection node value indicating whether or not the linear combination node value is selected"; this is a mathematical calculation
- "calculating an output value based on the linear combination node value and the selection node value"; this is a mathematical calculation

Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements are as follows:
- (Claim 1) "An application specific integrated circuit (ASIC) for an artificial neural network (ANN), the ASIC comprising: a plurality of neurons organized in an array, wherein each neuron includes a register, a microprocessor, and at least one input; and a plurality of synaptic circuits, each synaptic circuit including a memory for storing a weight, wherein each neuron is connected to at least one other neuron via one of the plurality of synaptic circuits, wherein the plurality of neurons comprises";
- (Claim 4) "using an artificial neural network including a plurality of neurons organized in an array, wherein each neuron includes a register, a microprocessor, and at least one input; and a plurality of input circuits, each synaptic circuit including a memory for storing a weight, wherein each neuron is connected to at least one other neuron via one of the plurality of synaptic circuits, the information processing method comprising";
- (Claim 5) "a non-transitory recording medium that stores a program having an information processing method using an artificial neural network including a plurality of neurons organized in an array, wherein each neuron includes a register, a microprocessor, and at least one input; and a plurality of synaptic circuits, each synaptic circuit including a memory for storing a weight, wherein each neuron is connected to at least one other neuron via one of the plurality of synaptic circuits, the information processing method comprising".

These limitations amount to nothing more than an instruction to apply the abstract idea using a generic computer as per MPEP 2106.05(f), because they merely recite generic computer hardware components ("register", "microprocessor", "memory") and generic neural networks ("artificial neural network", "neuron", "synaptic circuit").

(Claim 1) "stores, in a recording medium, an output value"; this amounts to nothing more than an instruction to apply the abstract idea using a generic computer as per MPEP 2106.05(f), as it merely recites using a generic recording medium to store data.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:
- (Claim 1) "An application specific integrated circuit (ASIC) for an artificial neural network (ANN), the ASIC comprising: a plurality of neurons organized in an array, wherein each neuron includes a register, a microprocessor, and at least one input; and a plurality of synaptic circuits, each synaptic circuit including a memory for storing a weight, wherein each neuron is connected to at least one other neuron via one of the plurality of synaptic circuits, wherein the plurality of neurons comprises";
- (Claim 4) "using an artificial neural network including a plurality of neurons organized in an array, wherein each neuron includes a register, a microprocessor, and at least one input; and a plurality of input circuits, each synaptic circuit including a memory for storing a weight, wherein each neuron is connected to at least one other neuron via one of the plurality of synaptic circuits, the information processing method comprising";
- (Claim 5) "a non-transitory recording medium that stores a program having an information processing method using an artificial neural network including a plurality of neurons organized in an array, wherein each neuron includes a register, a microprocessor, and at least one input; and a plurality of synaptic circuits, each synaptic circuit including a memory for storing a weight, wherein each neuron is connected to at least one other neuron via one of the plurality of synaptic circuits, the information processing method comprising".

These limitations amount to nothing more than an instruction to apply the abstract idea using a generic computer as per MPEP 2106.05(f), because they merely recite generic computer hardware components ("register", "microprocessor", "memory") and generic neural networks ("artificial neural network", "neuron", "synaptic circuit").

(Claim 1) "stores, in a recording medium, an output value"; this amounts to nothing more than an instruction to apply the abstract idea using a generic computer as per MPEP 2106.05(f), as it merely recites using a generic recording medium to store data.
Dependent Claims

Claim 2 recites:
- "wherein a total value obtained by summing the value of the selection node for all selection nodes is a constant value"; this is a mathematical calculation under Step 2A Prong 1
- "increases a maximum value among the values of the selection node for the all selection nodes"; increasing a maximum value is a mathematical calculation under Step 2A Prong 1
- "wherein, in a machine learning phase, machine learning is performed that"; machine learning, broadly recited at a high level of generality, amounts to nothing more than an instruction to apply the abstract idea using a generic computer under Steps 2A Prong 2 and 2B

Claim 3 recites:
- "sets whether a combination of the linear combination node and the selection node is used or not used"; setting whether something is or is not used is an evaluation or judgment that can be carried out by a human in the mind or with pen and paper, and is therefore a mental process under Step 2A Prong 1
- "a binary mask node that"; this amounts to nothing more than an instruction to apply the abstract idea using a generic computer as per MPEP 2106.05(f) under Steps 2A Prong 2 and 2B

Claim 6 recites:
- "determine an operation for a control target based on the output value"; determining is a mental process
- "control the control target according to the determined operation"; this amounts to mere instructions to apply an exception as per MPEP 2106.05(f) ("(1) Whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished") under Steps 2A Prong 2 and 2B

Claim 7 recites:
- "wherein the control target includes at least one of a valve in a chemical plant, a construction site, an automobile production plant, a precision parts manufacturing plant, or a robot"; this amounts to merely indicating a field of use or technological environment in which to apply a judicial exception as per MPEP 2106.05(h) under Steps 2A Prong 2 and 2B

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 3-7 are rejected under 35 U.S.C. 103 as being unpatentable over Rajah et al. ("ASIC design of a Kohonen neural network microchip"; hereinafter "Rajah") in view of Dolezel et al. ("Computationally simple neural network approach to determine piecewise-linear dynamical model"; hereinafter "Dolezel"), and further in view of Jacobs et al. ("Learning Piecewise Control Strategies in a Modular Neural Network Architecture"; hereinafter "Jacobs").
As per Claim 1, Rajah teaches an application specific integrated circuit (ASIC) for an artificial neural network (ANN) (Rajah, Page 148 Abstract: "The ASIC design of the KNN processor adopts a novel implementation approach whereby the computation of the KNN algorithm is performed on the custom ASIC microchip and its operations are governed by a FPGA based controller.");

the ASIC comprising: a plurality of neurons organized in an array (Rajah, Page 150: "IV. The Structure of the Neuron-Array Computation Engine (ACE)" and Fig. 3: "The Scalable ACE Design Consisting Arrays Of PEs.");

wherein each neuron includes a register, a microprocessor, and at least one input (Rajah, Page 148 Intro: "The Neuron-Array Computation Engine (ACE) of the KARN processor is made up of scalable arrays of Processing Elements (PE)." Examiner notes that the "PE"s are shown in Figure 1 as being components of a "microchip", and are therefore each "microprocessors". Figure 1 also shows them having inputs, as well as outputs. Rajah, Page 149 Figure 2 discloses a "Transfer Register".);

and a plurality of synaptic circuits, each synaptic circuit including a memory for storing a weight, wherein each neuron is connected to at least one other neuron via one of the plurality of synaptic circuits (Rajah, Page 149 Figure 2 discloses a "Synapse RAM", which is a memory for a synaptic circuit. A synapse is a connection between neurons, and the synapse stores a weight for that connection, as Figure 2 shows that the Synapse RAM is used to calculate a weight update (x - w));

an output node that stores, in a recording medium, an output value (Rajah, Page 149 Fig. 2 discloses a "Transfer Register" that stores the calculated weight difference.)

However, Rajah does not teach wherein the plurality of neurons comprises: a plurality of linear combination nodes that linearly combine input values and first weight coefficients; a selection node that is provided to the linear combination node and calculates, according to the input values and second weight coefficients different from the first weight coefficients, a value indicating whether or not a corresponding linear combination node is selected; and an output node that stores, in a recording medium, an output value calculated based on a value of the linear combination node and a value of the selection node.

Dolezel teaches wherein the plurality of neurons comprises: a plurality of linear combination nodes that linearly combine input values and first weight coefficients (Dolezel, Page 359 Figure 9 [figure reproduced as an image in the original action]. In Figure 9, the "Σ", or "weighted sum function", nodes are nodes that perform linear combinations that combine input values and first weight coefficients.);

a selection node that is provided to the linear combination node and calculates, according to the input values and second weight coefficients [different from the first weight coefficients], a value indicating whether or not a corresponding linear combination node is selected (Dolezel, Page 358: "First, it is necessary to define saturation vector v of S elements. This vector indicates saturation states of the hidden neurons … In other words, the value of vj indicates one of three different states of neuron j of hidden layer." Dolezel, Page 359: "Thus, weights of these neurons should be inactivated from the sum (14).
This requirement could be efficiently performed by multiplicating each sum by the term (1 - |vj|) as shown above." Thus, element vj of the "saturation vector" is a value indicating whether or not a corresponding linear combination node is selected ("inactivated").);

and an output node that [stores, in a recording medium,] an output value calculated based on a value of the linear combination node and a value of the selection node (Dolezel, Figure 9 cited above discloses an "Output Layer" comprising a "Linear (identic) activation function" that calculates an output value based on the linear combination nodes and selection nodes.)

Dolezel is analogous art because it is in the field of endeavor of machine learning. It would have been obvious before the effective filing date of the claimed invention to combine the ASIC neural network of Rajah with the piecewise linear neural network of Dolezel. One of ordinary skill in the art would have been motivated to do so because the piecewise linear neural network technique is particularly useful in an industrial environment (Dolezel Abstract: "The mentioned technique is encouraged to be used especially in process control for controllers tuning. The issue is, that hardware used in industrial applications is reliable and robust, but rather simple and with low performance. Thus, the aim of this paper is to show, whether the proposed technique is capable to be applied in industrial environment.")

However, the combination of Rajah and Dolezel does not teach a selection node that is provided to the linear combination node and calculates, according to the input values and second weight coefficients different from the first weight coefficients, a value indicating whether or not a corresponding linear combination node is selected.

Jacobs teaches a selection node that is provided to the linear combination node and calculates, according to the input values and second weight coefficients different from the first weight coefficients (recall above that Dolezel teaches the binary decision of "whether or not a corresponding linear combination node is selected." Jacobs, Page 338 Fig. 2 and Section III Para 2 [figure reproduced as an image in the original action]: "The architecture, which is illustrated in Fig. 2, consists of two types of networks: expert networks and a gating network. The expert networks compete to learn the training patterns and the gating network mediates this competition. The expert networks and the gating network are layered or recurrent neural networks with arbitrary connectivity. The gating network is restricted to have as many output units as there are expert networks, and the activations of these output units must be nonnegative and sum to one." Here, Jacobs discloses a selection node ("gating network") that is based on a completely different set of weights (g1 and g2) than those in each of the "Expert Networks".)

Jacobs is analogous art because it is in the field of endeavor of piecewise strategies for neural networks. It would have been obvious before the effective filing date of the claimed invention to combine the piecewise linear neural network of Dolezel with the separate "gating network" of Jacobs. The combination would result in a system in which the piecewise linear neural network of Dolezel would select the linear combination nodes by using a completely separate set of weights.
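As an editor's illustration of the arrangement the examiner describes (this is not code from any cited reference; all names and values are invented), the following numpy sketch shows linear combination nodes driven by one weight set and a gating-style selection computed from a second, independent weight set whose outputs are nonnegative and sum to one:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)        # input values
W1 = rng.normal(size=(3, 4))  # "first weight coefficients" (linear combination nodes)
W2 = rng.normal(size=(3, 4))  # "second weight coefficients" (separate selection/gating weights)

node_values = W1 @ x          # linear combination node values (Dolezel's weighted sums)

logits = W2 @ x               # gating computed from a completely different weight set (Jacobs)
g = np.exp(logits - logits.max())
g = g / g.sum()               # softmax: nonnegative selection values that sum to one

output_value = g @ node_values  # output based on node values and selection values
print(g, output_value)
```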
One of ordinary skill in the art would have been motivated to do so in order to achieve better performance in industrial control tasks and to be able to adapt to changes and diverse environment parameters over time (Jacobs, Abstract: "Simulations show that the modular architecture's performance is superior to that of a single network on a multipayload robot motion control task" and Conclusion: "To summarize, the dynamics of a plant frequently change with its operating conditions. Design methodologies, such as gain scheduling, allow a designer to construct different control laws for different regions of a plant's parameter space. When a controller is learned instead of designed, analogous issues arise. Temporal crosstalk, which retards learning, is particularly salient in control problems. Because controlled dynamical systems tend to move relatively slowly through state space, learning controllers receive training data from a local region for long periods of time. Temporal crosstalk is therefore inevitable if the dynamics of the plant vary at different operating points. An advantage of modular neural network architectures is that they are able to partition a plant's parameter space into a number of regions and can allocate different networks to learn a control law for each region. As a result, they are relatively robust to temporal crosstalk.")

As per Claim 3, the combination of Rajah, Dolezel, and Jacobs teaches the ASIC according to claim 1. Dolezel teaches a binary mask node that sets whether a combination of the linear combination node and the selection node is used or not used (Dolezel, Page 358, discloses that the selection node vj has "three different states" of 1, 0, and -1. Dolezel, Page 359, discloses performing a calculation of (1 - |vj|) to convert this to a binary value, thus becoming a "binary mask" of vj: "Moreover, notice that the term (1 - |vj|) is positive ((1 - |vj|) = 1) for linear state of the hidden neuron j, and neutral ((1 - |vj|) = 0) for the saturated state of the hidden neuron." Furthermore, Dolezel, Page 360 states: "Thus, weights of these neurons should be inactivated from the sum (14). This requirement could be efficiently performed by multiplicating each sum by the term (1 - |vj|) as shown above.")

As per Claim 4, this is a method claim corresponding to ASIC claim 1, and is rejected for similar reasons.

As per Claim 5, this is a non-transitory recording medium claim corresponding to ASIC claim 1, and is rejected for similar reasons.

As per Claim 6, the combination of Rajah, Dolezel, and Jacobs teaches the ASIC according to claim 1. Jacobs teaches a processor further configured to determine an operation for a control target based on the output value; and control the control target according to the determined operation (Jacobs, Page 339 Section IV: "We have trained several neural network systems to serve as feedforward controllers for a simulated robot arm when a variety of payloads, each of a different mass, must be moved along a specified trajectory … Before describing the neural network systems, we detail the training procedure used to provide error information to the systems. The procedure involves training an adaptive feedforward controller to control a robot arm in conjunction with a fixed feedback controller.") It would have been obvious to one of ordinary skill in the art to combine the teachings of Jacobs with Rajah and Dolezel for at least the reasons recited in the rejection to Claim 1.
As per Claim 7, the combination of Rajah, Dolezel, and Jacobs teaches the ASIC according to claim 6. Jacobs teaches wherein the control target includes at least one of a valve in a chemical plant, a construction site, an automobile production plant, a precision parts manufacturing plant, or a robot (Jacobs, Page 339 Section IV: "We have trained several neural network systems to serve as feedforward controllers for a simulated robot arm when a variety of payloads, each of a different mass, must be moved along a specified trajectory … Before describing the neural network systems, we detail the training procedure used to provide error information to the systems. The procedure involves training an adaptive feedforward controller to control a robot arm in conjunction with a fixed feedback controller.") It would have been obvious to one of ordinary skill in the art to combine the teachings of Jacobs with Rajah and Dolezel for at least the reasons recited in the rejection to Claim 1.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rajah, Dolezel, and Jacobs, further in view of Choi et al. ("Learning to Compose Task-Specific Tree Structures", hereinafter "Choi").

As per Claim 2, the combination of Rajah, Dolezel, and Jacobs teaches the ASIC according to claim 1 as well as selection nodes (see Dolezel in the rejection to Claim 1). However, the combination does not explicitly teach wherein a total value obtained by summing the value of the selection node for all selection nodes is a constant value; and wherein, in a machine learning phase, machine learning is performed that increases a maximum value among the values of the selection nodes for the all selection nodes.

Choi teaches wherein a total value obtained by summing the value of the selection node for all selection nodes is a constant value; and wherein, in a machine learning phase, machine learning is performed that increases a maximum value among the values of the selection nodes for the all selection nodes (recall above that Dolezel teaches selection nodes. Choi, Pages 5095-5096, discloses: "Gumbel-Softmax distribution is motivated by Gumbel-Max trick (Maddison, Tarlow, and Minka 2014), an algorithm for sampling from a categorical distribution … In Gumbel-Softmax, the discontinuous argmax function of Gumbel-Max trick is replaced by the differentiable softmax function. That is, given unnormalized probabilities π1, · · · , πk, a sample y = (y1, · · ·, yk) from the Gumbel-Softmax distribution is drawn by [equation reproduced as an image in the original action], where τ is a temperature parameter; as τ diminishes to zero, a sample from the Gumbel-Softmax distribution becomes cold and resembles the one-hot sample. Straight-Through (ST) Gumbel-Softmax estimator (Jang, Gu, and Poole 2017) … is a discrete version of the continuous Gumbel-Softmax estimator … In the forward pass, it discretizes a continuous probability vector y sampled from the Gumbel-Softmax distribution into the one-hot vector yST = (y1ST, · · · , ykST), where [equation reproduced as an image in the original action]. And in the backward pass it simply uses the continuous y, thus the error signal is still able to backpropagate.") Examiner notes that here, Choi discloses a Gumbel-Softmax, in which all elements sum to 1, and therefore the total value obtained by summing the values of all selection nodes is a constant value.
Furthermore, Choi discloses machine learning with training (forward and backward pass) in which the Gumbel-Softmax distribution is learned from the continuous values that sum to 1, to become "argmax", which is a one-hot vector of a single 1 and zeros; therefore the maximum value of the vector of values increases to 1 during the machine learning process.

Choi is analogous art because it is in the field of endeavor of machine learning. It would have been obvious before the effective filing date of the claimed invention to combine the piecewise linear neural network with piecewise linear activations of Dolezel and the piecewise gating network of Jacobs with the Gumbel-Softmax training process of Choi. The combination would result in a system such that:
- The piecewise linear neural network of Dolezel would select the linear combination nodes by using a completely separate set of weights as described in Jacobs.
- Jacobs' gating network uses "softmax", in which values sum to 1, as stated on Page 338: "The gating network is restricted to have as many output units as there are expert networks, and the activations of these output units must be nonnegative and sum to one. To meet these constraints, we use the "softmax" activation function [9]; specifically, the activation of the ith output unit of the gating network, denoted gi, is …"
- Jacobs' weights for the softmax are concurrently trained, as stated on Pages 338-339: "During training, the weights of the expert and gating networks are adjusted simultaneously."
- Choi's Gumbel-Softmax is employed, in which Jacobs' softmax converges to a one-hot vector, thus converging with Dolezel's teaching of the claimed limitation of "a value indicating whether or not a corresponding linear combination node is selected".

Thus, the combination of Rajah, Dolezel, Jacobs, and Choi teaches the claimed limitations. One of ordinary skill in the art would have been motivated to make this combination with Choi because Dolezel is directed to altering a model's computation path by selecting the segments of the piecewise activation, and Choi discloses that their Gumbel-Softmax trick is "differentiable" (as opposed to pure argmax one-hot vectors), and therefore able to be trained (and thus compatible with Jacobs), which is useful for this purpose (Choi, Page 5096: "In Gumbel-Softmax, the discontinuous argmax function of Gumbel-Max trick is replaced by the differentiable softmax function … ST Gumbel-Softmax estimator is useful when a model needs to utilize discrete values directly, for example in the case that a model alters its computation path based on samples drawn from a categorical distribution.").
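For readers unfamiliar with the Straight-Through Gumbel-Softmax that Choi relies on, here is a minimal numpy sketch of just the sampling step (an editor's illustration of the standard Jang, Gu, and Poole formulation; the backward pass and any training loop are omitted, and all names are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def st_gumbel_softmax(log_pi, tau=0.5):
    """Forward pass of ST Gumbel-Softmax: a soft sample whose entries
    sum to a constant value of 1, discretized to a one-hot vector."""
    g = rng.gumbel(size=log_pi.shape)   # Gumbel(0, 1) noise
    y = np.exp((log_pi + g) / tau)
    y = y / y.sum()                     # soft sample; entries sum to 1
    y_hard = np.zeros_like(y)
    y_hard[np.argmax(y)] = 1.0          # one-hot vector used in the forward pass
    return y, y_hard

soft, hard = st_gumbel_softmax(np.log(np.array([0.2, 0.5, 0.3])))
print(soft.sum(), hard)                 # ~1.0 and a one-hot vector
```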
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEONARD A SIEGER, whose telephone number is (571) 272-9710. The examiner can normally be reached M-F 8:00 am - 5:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LEONARD A SIEGER/
Examiner, Art Unit 2126

Prosecution Timeline

Sep 23, 2021 — Application Filed
May 23, 2025 — Non-Final Rejection — §101, §103, §112
Aug 25, 2025 — Response Filed
Oct 08, 2025 — Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583466
VEHICLE CONTROL MODULES INCLUDING CONTAINERIZED ORCHESTRATION AND RESOURCE MANAGEMENT FOR MIXED CRITICALITY SYSTEMS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12578751
DATA PROCESSING CIRCUITRY AND METHOD, AND SEMICONDUCTOR MEMORY
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12561162
AUTOMATED INFORMATION TECHNOLOGY INFRASTRUCTURE MANAGEMENT
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12536291
PLATFORM BOOT PATH FAULT DETECTION ISOLATION AND REMEDIATION PROTOCOL
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12393641
METHODS FOR UTILIZING SOLVER HARDWARE FOR SOLVING PARTIAL DIFFERENTIAL EQUATIONS
Granted Aug 19, 2025 (2y 5m to grant)
Study what changed to get these cases past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50%
With Interview: 76% (+25.8%)
Median Time to Grant: 3y 8m
PTA Risk: Moderate

Based on 509 resolved cases by this examiner. Grant probability is derived from the career allow rate.
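The projection figures combine the two headline statistics above. A minimal sketch of the arithmetic (illustrative; it assumes the interview lift is an additive percentage-point adjustment, which matches the displayed 50% → 76%):

```python
base = 253 / 509          # career allow rate, ~49.7%, displayed as 50%
interview_lift = 0.258    # +25.8 percentage points

with_interview = base + interview_lift
print(f"{base:.0%} baseline -> {with_interview:.0%} with interview")  # 50% -> 76%
```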
