Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the original filing on 08/09/2023. Claims 1-20 are pending and have been considered below.

Information Disclosure Statement

3. The information disclosure statements (IDSs) submitted on 01/19/2024 and 03/19/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.

Claim Rejections - 35 USC § 101

4. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: the claims are directed to a process, machine, and manufacture.

Step 2A, Prong 1: Claims 1, 19, and 20 recite, in part: determining, for a loss function which is a function of a parameter vector comprising a plurality of parameters, values for the parameters for which the parameter vector is a stationary point of the loss function (Mathematical concepts, mathematical relationships); (a) determining at least one drift value based on a Hessian matrix of the loss function based on second-order partial derivatives of the loss function for current values of the parameters (Mathematical concepts, mathematical calculations); (b) determining at least one learning rate value by evaluating a learning rate function based on, and having an inverse relationship with, the at least one drift value (Mathematical concepts, mathematical relationships);
(c) determining respective updates to the parameters based upon a product of the at least one learning rate value and a gradient of the loss function with respect to the respective parameter for current values of the parameters (Mathematical concepts, mathematical calculations); and (d) updating the parameters based upon the determined respective updates (Mathematical concepts, mathematical calculations).

Step 2A, Prong 2: this judicial exception is not integrated into a practical application. The additional elements of one or more computers, and one or more storage devices communicatively coupled to the one or more computers, wherein the one or more storage devices store instructions that, when executed by the one or more computers, cause the one or more computers to perform operations, are mere instructions to apply the exception using a generic computer component. The additional elements of determining initial values for the parameters and repeatedly updating the parameters amount to mere data gathering, are recited at a high level of generality, and thus are insignificant extra-solution activity.

Step 2B: the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, either alone or in combination. For the same reasons as under Step 2A, Prong 2, the one or more computers and one or more storage devices are mere instructions to apply the exception using a generic computer component, and determining initial values for the parameters and repeatedly updating the parameters are mere data gathering and insignificant extra-solution activity.
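For clarity of the record, the combination of recited steps (a)-(d) can be illustrated by the following sketch. This is a non-limiting, hypothetical reading: the particular drift definition (magnitude of the Hessian-gradient product), the toy quadratic loss, and all function names are the examiner's illustrative assumptions, not the claimed invention.

```python
def update_step(theta, grad_fn, hess_fn, eps=1e-8):
    """One illustrative iteration of recited steps (a)-(d)."""
    g = grad_fn(theta)                                    # gradient at current parameters
    H = hess_fn(theta)                                    # matrix of second-order partials
    # (a) a drift value derived from the Hessian: here, the magnitude of H*g
    Hg = [sum(H[i][j] * g[j] for j in range(len(g))) for i in range(len(g))]
    drift = sum(v * v for v in Hg) ** 0.5
    lr = 1.0 / (drift + eps)                              # (b) inverse relationship with drift
    updates = [lr * gi for gi in g]                       # (c) product of rate and gradient
    return [t - u for t, u in zip(theta, updates)]        # (d) apply the updates

# Toy quadratic loss f(x, y) = x^2 + y^2: gradient 2*theta, constant Hessian 2I.
grad_fn = lambda th: [2.0 * t for t in th]
hess_fn = lambda th: [[2.0, 0.0], [0.0, 2.0]]
theta = update_step([1.0, 1.0], grad_fn, hess_fn)
```

As the sketch shows, each operation is a mathematical calculation on the parameter vector, consistent with the characterization of the limitations as mathematical concepts.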
Claims 2-18 provide further limitations to the abstract idea (Mathematical concepts and/or Mental processes) as rejected in claim 1; however, they do not recite any additional elements that would amount to a practical application or significantly more than an abstract idea (data gathering / insignificant extra-solution activity and/or generic computer component).

Claim Rejections – 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1, 7, 10-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jung et al. (U.S. Patent Application Pub. No. US 20110255741 A1) in view of Zeiler (ADADELTA: An Adaptive Learning Rate Method; arXiv, published 2012, pages 1-6).

Claim 1: Jung teaches a computer-implemented method for determining, for a loss function which is a function of a parameter vector comprising a plurality of parameters, values for the parameters for which the parameter vector is a stationary point of the loss function (i.e., trainable parameters are adjusted, and then the forward and reverse propagation is repeated until parameters converge (i.e., the difference between each of the present and previous parameter falls below a predetermined threshold).
Backward propagation refers to the process of computing an error (e.g., the squared L2 norm) between the scores obtained by forward propagation and the supervised label for a given ROI 1502, then using that error to change the free parameters of each layer of the network 1500 in reverse order. The update for each trainable parameter in each layer is computed by gradient descent; para. [0084]), the method comprising: and repeatedly updating the parameters (i.e., Training the convolutional network is a stochastic process, during which a set of labeled ROIs 1602 is shuffled and iterated. Each of the labeled ROIs 1502 is forward propagated and then backward propagated through the network 1500, after which trainable parameters are adjusted, and then the forward and reverse propagation is repeated until parameters converge; para. [0084]) by:

(a) determining at least one drift value based on a Hessian matrix of the loss function based on second-order partial derivatives of the loss function for current values of the parameters (i.e., the training process may be accelerated by using a small subset of labeled ROIs to estimate the second derivative with a diagonal Hessian, then adjusting the learning rate for each free parameter to speed training. As used herein, the term Hessian matrix (or simply the Hessian) is the square matrix of second-order partial derivatives of a function; that is, it describes the local curvature of a function of many variables; para. [0085]);

(b) determining at least one learning rate value by evaluating a learning rate function based on, and having a relationship with, the at least one drift value (i.e., the training process may be accelerated by using a small subset of labeled ROIs to estimate the second derivative with a diagonal Hessian, then adjusting the learning rate for each free parameter to speed training.
As used herein, the term Hessian matrix (or simply the Hessian) is the square matrix of second-order partial derivatives of a function; that is, it describes the local curvature of a function of many variables; para. [0085]);

(c) determining respective updates to the parameters based upon the at least one learning rate value and a gradient of the loss function with respect to the respective parameter for current values of the parameters (i.e., Backward propagation refers to the process of computing an error (e.g., the squared L2 norm) between the scores obtained by forward propagation and the supervised label for a given ROI 1502, then using that error to change the free parameters of each layer of the network 1500 in reverse order. The update for each trainable parameter in each layer is computed by gradient descent; para. [0084]); and

(d) updating the parameters based upon the determined respective updates (i.e., Training the convolutional network is a stochastic process, during which a set of labeled ROIs 1602 is shuffled and iterated. Each of the labeled ROIs 1502 is forward propagated and then backward propagated through the network 1500, after which trainable parameters are adjusted, and then the forward and reverse propagation is repeated until parameters converge; para. [0084]).

Jung does not explicitly teach determining initial values for the parameters; an inverse relationship; or a product of the at least one learning rate value and a gradient of the loss function.

However, Zeiler teaches determining initial values for the parameters (i.e., initial parameter; Section III, page 3); (b) determining at least one learning rate value by evaluating a learning rate function based on, and having an inverse relationship with, the at least one drift value (i.e.,
Once the diagonal of the Hessian is computed, diag(H), the update rule becomes: equation (6), where the absolute value of this diagonal Hessian is used to ensure the negative gradient direction is always followed and µ is a small constant to improve the conditioning of the Hessian for regions of small curvature; Section 2.3, pages 2-3; that is, a step-scaling term that is inverse to a Hessian-derived curvature quantity);

(c) determining respective updates to the parameters based upon a product of the at least one learning rate value and a gradient of the loss function with respect to the respective parameter for current values of the parameters (i.e., In this paper we consider gradient descent algorithms which attempt to optimize the objective function by following the steepest descent direction given by the negative of the gradient gt. This general approach can be applied to update any parameters for which a derivative can be obtained: ∆xt = −ηgt (equation (2)), where gt is the gradient of the parameters at the t-th iteration and η is a learning rate which controls how large of a step to take in the direction of the negative gradient; Section 1 Introduction, page 1); and

(d) updating the parameters based upon the determined respective updates (i.e., the update rule becomes: xt+1 = xt + ∆xt; Section 1 Introduction, page 1).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Jung to include the feature of Zeiler. One would have been motivated to make this modification because it improves convergence speed and stability.

Claim 7: Jung and Zeiler teach the method of claim 1. Jung further teaches in which there is a respective learning rate value for each parameter (i.e., the training process may be accelerated by using a small subset of labeled ROIs to estimate the second derivative with a diagonal Hessian, then adjusting the learning rate for each free parameter to speed training.
As used herein, the term Hessian matrix (or simply the Hessian) is the square matrix of second-order partial derivatives of a function; that is, it describes the local curvature of a function of many variables; para. [0085]), the update to each parameter being based upon the respective learning rate value and the gradient of the loss function with respect to that parameter for the current values of the parameters (i.e., Each of the labeled ROIs 1502 is forward propagated and then backward propagated through the network 1500, after which trainable parameters are adjusted, and then the forward and reverse propagation is repeated until parameters converge (i.e., the difference between each of the present and previous parameter falls below a predetermined threshold). Backward propagation refers to the process of computing an error (e.g., the squared L2 norm) between the scores obtained by forward propagation and the supervised label for a given ROI 1502, then using that error to change the free parameters of each layer of the network 1500 in reverse order. The update for each trainable parameter in each layer is computed by gradient descent; para. [0084]).

Jung does not explicitly teach a product.

However, Zeiler further teaches in which there is a respective learning rate value for each parameter, the update to each parameter being based upon a product of the respective learning rate value and the gradient of the loss function with respect to that parameter for the current values of the parameters (i.e., equations (1) and (2), ∆xt = −ηgt, where gt is the gradient of the parameters at the t-th iteration, ∂f(xt)/∂xt, and η is a learning rate which controls how large of a step to take in the direction of the negative gradient; page 1).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Jung to include the feature of Zeiler.
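Zeiler's per-parameter rule (equation (6)), under which each parameter receives its own learning rate inversely related to the corresponding diagonal-Hessian entry, can be sketched as follows. This is an illustrative reading with hypothetical numbers; µ is Zeiler's small conditioning constant.

```python
def per_parameter_step(x, g, h_diag, mu=1e-4):
    """Update each parameter by the product of its own learning rate
    1/(|H_ii| + mu) and its gradient component, per Zeiler's equation (6)."""
    return [xi - gi / (abs(hi) + mu) for xi, gi, hi in zip(x, g, h_diag)]

# Hypothetical gradient and diagonal-Hessian estimates for two parameters:
# high-curvature parameters take small steps, low-curvature ones take large steps.
x = per_parameter_step([1.0, 1.0], [2.0, 0.5], [4.0, 0.25])
```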
One would have been motivated to make this modification because it improves convergence speed and stability.

Claim 10: Jung and Zeiler teach the method of claim 1. Jung further teaches in which the parameters comprise neural network parameters defining the functions of a plurality of nodes of a neural network (i.e., the multi-layer convolutional network 1500 may comprise at least one each of a convolution layer 1504, a pooling layer 1506, and a fully connected network layer 1508. The fully connected layer 1508 includes a set of hidden nodes, each of which has a single trainable weight for each input feature; para. [0080]-[0084]), the neural network being configured to perform a function on an input data item to generate a corresponding output data item (i.e., forward propagation refers to the process of passing an ROI 1502 through each of the 7 layers, resulting in 2 scores which correspond to two classes: 'pedestrian' and 'non-pedestrian'; para. [0084]), the loss function being indicative of the ability of the neural network to perform a computational task on input data items (i.e., Backward propagation refers to the process of computing an error (e.g., the squared L2 norm) between the scores obtained by forward propagation and the supervised label for a given ROI 1502, then using that error to change the free parameters of each layer of the network 1500 in reverse order. The update for each trainable parameter in each layer is computed by gradient descent; para. [0084]).

Claim 11: Jung and Zeiler teach the method of claim 10. Jung further teaches in which the loss function is based on one or more training data items (i.e., Training the convolutional network is a stochastic process, during which a set of labeled ROIs 1602 is shuffled and iterated. Each of the labeled ROIs 1502 is forward propagated and then backward propagated through the network 1500; para. [0084]), one or more corresponding target data items associated with the one or more training data items (i.e.,
Backward propagation refers to the process of computing an error (e.g., the squared L2 norm) between the scores obtained by forward propagation and the supervised label for a given ROI 1502, then using that error to change the free parameters of each layer of the network 1500 in reverse order. The update for each trainable parameter in each layer is computed by gradient descent; para. [0084]), and one or more corresponding output data items generated by the neural network upon receiving the one or more training data items (i.e., forward propagation refers to the process of passing an ROI 1502 through each of the 7 layers, resulting in 2 scores which correspond to two classes: 'pedestrian' and 'non-pedestrian'; para. [0084]).

Claim 12: Jung and Zeiler teach the method of claim 11. Jung further teaches in which the training data items comprise: image data items; video data items; audio data items; sensor data items, encoding the output of at least one sensor describing a state of an environment; or text data items encoding a sample of natural language text (i.e., receiving imagery of a scene from one or more image capturing devices; deriving a depth map and appearance information (i.e., color and intensity) from the imagery; para. [0010, 0011, 0015]).

Claim 13: Jung and Zeiler teach the method of claim 10. Jung further teaches in which the output data item generated by the neural network upon receiving one of the input data items is data indicating that the input data item is in a specified one of a plurality of classes (i.e., forward propagation refers to the process of passing an ROI 1502 through each of the 7 layers, resulting in 2 scores which correspond to two classes: 'pedestrian' and 'non-pedestrian'; para. [0084]).

Claim 14: Jung and Zeiler teach the method of claim 10. Jung further teaches in which the output data item is input data for a controller configured to generate control data for controlling an agent to perform an action in an environment (i.e.,
Portions of a processed video/audio data stream 130 may be stored temporarily in the computer readable medium 128 for later output to an on-board monitor 132, to an on-board automatic collision avoidance system 134, or to a network 136, such as the Internet; para. [0004, 0046]), the output data item being indicative of the action to be performed by the agent or a selection of a policy from which actions to be performed by the agent are selected (i.e., Portions of a processed video/audio data stream 130 may be stored temporarily in the computer readable medium 128 for later output to an on-board monitor 132, to an on-board automatic collision avoidance system 134, or to a network 136, such as the Internet; para. [0004, 0046]).

Claim 15: Jung and Zeiler teach the method of claim 14. Jung further teaches in which the environment is a real-world environment (i.e., FIG. 1 depicts a vehicle 100 that is equipped with an exemplary digital processing system 110 configured to acquire a plurality of images and detect the presence of one or more pedestrians 102 in a scene 104 in the vicinity of the vehicle 100, according to an embodiment of the present invention; para. [0003, 0004, 0043]), and the agent is an electromechanical agent configured to operate in the real-world environment based on the control data (i.e., Portions of a processed video/audio data stream 130 may be stored temporarily in the computer readable medium 128 for later output to an on-board monitor 132, to an on-board automatic collision avoidance system 134, or to a network 136, such as the Internet; para. [0004, 0046]).

Claim 16: Jung and Zeiler teach the method of claim 10. Jung further teaches in which the neural network includes a sequence of a plurality of layers (i.e., the multi-layer convolutional network 1500 comprises 7 trainable layers comprising 3 convolutional layers, 2 pooling layers, and 2 fully connected network layers, arranged as shown in FIG.
15B according to the following sequence: convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, convolutional layer 3, fully connected network layer 1, and fully connected network layer 2; para. [0080]).

Claim 17: Jung and Zeiler teach the method of claim 16. Jung further teaches in which at least one of the layers is a convolutional layer (i.e., the multi-layer convolutional network 1500 comprises 7 trainable layers comprising 3 convolutional layers, 2 pooling layers, and 2 fully connected network layers, arranged as shown in FIG. 15B according to the following sequence: convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, convolutional layer 3, fully connected network layer 1, and fully connected network layer 2; para. [0080]).

Claims 19-20 are similar in scope to claim 1 and are rejected under a similar rationale.

7. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Jung in view of Zeiler and further in view of Gur-Ari et al. (Gradient Descent Happens in a Tiny Subspace; arXiv, published 2018, pages 1-19).

Claim 2: Jung and Zeiler teach the method of claim 1. Jung does not explicitly teach in which the at least one drift value is determined based on a magnitude of the product of the Hessian matrix and the gradient of the loss function. However, Gur-Ari teaches in which the at least one drift value is determined based on a magnitude of the product of the Hessian matrix and the gradient of the loss function (i.e., the overlap between the gradient g and the Hessian-gradient product Hg during training, defined by overlap(g, Hg) ≡ gᵀHg / (||g|| · ||Hg||); Section 2.2 Hessian-Gradient Overlap, page 3).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jung and Zeiler to include the feature of Gur-Ari.
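The overlap quantity cited from Gur-Ari can be sketched as follows (an illustrative computation only; the vectors used are hypothetical):

```python
def overlap(g, hg):
    """overlap(g, Hg) = g^T Hg / (||g|| * ||Hg||), per Gur-Ari, Section 2.2."""
    dot = sum(a * b for a, b in zip(g, hg))
    norm = lambda v: sum(c * c for c in v) ** 0.5
    return dot / (norm(g) * norm(hg))

# When Hg is parallel to g, the overlap is 1 (gradient aligned with curvature).
val = overlap([1.0, 2.0], [2.0, 4.0])
```

Note that the ||Hg|| term in the denominator is a magnitude of the Hessian-gradient product, the quantity relied upon in mapping this limitation.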
One would have been motivated to make this modification because it improves stability and step-size control using a curvature-gradient measure.

8. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Jung in view of Zeiler and further in view of Hazan et al. (Beyond Convexity: Stochastic Quasi-Convex Optimization; NeurIPS, published 2015, pages 1-19).

Claim 3: Jung and Zeiler teach the method of claim 1. Jung does not explicitly teach in which at least one drift value is determined including a term which increases a magnitude of the drift value when the magnitude of the gradient of the loss function becomes small. However, Hazan teaches in which at least one drift value is determined including a term which increases a magnitude of the drift value when the magnitude of the gradient of the loss function becomes small (i.e., Algorithm 1, Normalized Gradient Descent: xt+1 = xt − ηĝt, where gt = ∇f(xt) and ĝt = gt/||gt||; Section 4, pages 4-5).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jung and Zeiler to include the feature of Hazan. One would have been motivated to make this modification because it is intuitively clear that to obtain robustness to plateaus (where the gradient can be arbitrarily small) and to exploding gradients (where the gradient can be arbitrarily large), one must ignore the size of the gradient.

9. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Jung in view of Zeiler, Hazan, and further in view of Huang et al. (On the Asymptotic Convergence and Acceleration of Gradient Methods; Journal of Scientific Computing, published 2021, pages 1-29).

Claim 4: Jung, Zeiler, and Hazan teach the method of claim 3.
Jung does not explicitly teach in which the at least one learning rate value is a function of a ratio of the magnitude of the gradient of the loss function and the magnitude of the product of the Hessian matrix and the gradient of the loss function. However, Huang teaches in which the at least one learning rate value is a function of a ratio of the magnitude of the gradient of the loss function and the magnitude of the product of the Hessian matrix and the gradient of the loss function (i.e., the asymptotic optimal gradient (AOPT) method, whose stepsize is given by αAOPT = ||gk|| / ||Agk||; page 3).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jung, Zeiler, and Hazan to include the feature of Huang. One would have been motivated to make this modification because it improves stability and convergence speed through curvature-aware step sizing.

10. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Jung in view of Zeiler and further in view of Schaul et al. (No More Pesky Learning Rates; PMLR, published 2013, pages 1-9).

Claim 8: Jung and Zeiler teach the method of claim 7. Jung further teaches in which the learning rate value for each parameter depends on (i) a Hessian matrix of the loss function based on second-order partial derivatives of the loss function for the current values of the parameters (i.e., the training process may be accelerated by using a small subset of labeled ROIs to estimate the second derivative with a diagonal Hessian, then adjusting the learning rate for each free parameter to speed training. As used herein, the term Hessian matrix (or simply the Hessian) is the square matrix of second-order partial derivatives of a function; that is, it describes the local curvature of a function of many variables; para. [0085]), and (ii) the gradient of the loss function with respect to the parameters (i.e.,
trainable parameters are adjusted, and then the forward and reverse propagation is repeated until parameters converge (i.e., the difference between each of the present and previous parameter falls below a predetermined threshold). Backward propagation refers to the process of computing an error (e.g., the squared L2 norm) between the scores obtained by forward propagation and the supervised label for a given ROI 1502, then using that error to change the free parameters of each layer of the network 1500 in reverse order. The update for each trainable parameter in each layer is computed by gradient descent; para. [0084]).

However, Zeiler further teaches in which the learning rate value for each parameter depends on the product of (i) a Hessian matrix of the loss function based on second-order partial derivatives of the loss function for the current values of the parameters, and (ii) the gradient of the loss function with respect to the parameters (i.e., equations (1) and (2), ∆xt = −ηgt, where gt is the gradient of the parameters at the t-th iteration, ∂f(xt)/∂xt, and η is a learning rate which controls how large of a step to take in the direction of the negative gradient; page 1).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Jung to include the feature of Zeiler. One would have been motivated to make this modification because it improves convergence speed and stability.

Jung does not explicitly teach in which the learning rate value for each parameter depends on a respective component of the product. However, Schaul teaches in which the learning rate value for each parameter depends on a respective component of the product of (i) a Hessian matrix of the loss function based on second-order partial derivatives of the loss function for the current values of the parameters, and (ii) the gradient of the loss function with respect to the parameters (i.e.,
Algorithm 1, Stochastic gradient descent with adaptive learning rates, computing per-parameter diagonal Hessian estimates and gradient component information; page 4).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jung and Zeiler to include the feature of Schaul. One would have been motivated to make this modification because it improves stability and convergence speed while reducing sensitivity to manual learning rate tuning.

11. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Jung in view of Zeiler and further in view of Sutskever et al. (On the importance of initialization and momentum in deep learning; PMLR, published 2013, pages 1-9).

Claim 9: Jung and Zeiler teach the method of claim 1. Jung does not explicitly teach in which the respective updates to the parameters are a sum of a respective momentum term and a term based on the product of the at least one learning rate and a gradient of the loss function with respect to the respective parameter for the current values of the parameters, the respective momentum terms being updated in each iteration. However, Sutskever teaches in which the respective updates to the parameters are a sum of a respective momentum term and a term based on the product of the at least one learning rate and a gradient of the loss function with respect to the respective parameter for the current values of the parameters, the respective momentum terms being updated in each iteration (i.e., classical momentum (CM) is a technique for accelerating gradient descent that accumulates a velocity vector in directions of persistent reduction in the objective across iterations. Given an objective function f(θ) to be minimized, classical momentum is given by: equation (1), vt+1 = µvt − ε∇f(θt), and equation (2), θt+1 = θt + vt+1; Section 2, pages 2-4).
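The classical momentum recurrence cited from Sutskever (equations (1) and (2)) can be sketched as follows; the toy loss, step values µ = 0.9 and ε = 0.1, and function names are hypothetical illustrations.

```python
def momentum_step(theta, v, grad_fn, mu=0.9, eps=0.1):
    """Classical momentum: v <- mu*v - eps*grad(theta), then theta <- theta + v
    (Sutskever equations (1) and (2))."""
    g = grad_fn(theta)
    v = [mu * vi - eps * gi for vi, gi in zip(v, g)]       # momentum term updated each iteration
    theta = [ti + vi for ti, vi in zip(theta, v)]          # parameters updated by the sum
    return theta, v

grad_fn = lambda th: [2.0 * t for t in th]   # gradient of the toy loss f(theta) = sum(theta_i^2)
theta, v = momentum_step([1.0, -1.0], [0.0, 0.0], grad_fn)
```

Each update is thus a sum of the accumulated momentum term µvt and the learning rate-gradient product ε∇f(θt), matching the mapped limitation.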
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jung and Zeiler to include the feature of Sutskever. One would have been motivated to make this modification because it improves stability and convergence speed in gradient-based training.

12. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Jung in view of Zeiler and further in view of Wang et al. (U.S. Patent Application Pub. No. US 20210064013 A1).

Claim 18: Jung and Zeiler teach the method of claim 10. Jung does not explicitly teach wherein the neural network is based upon a ResNet architecture. However, Wang teaches wherein the neural network is based upon a ResNet architecture (i.e., FIG. 11B shows a schematic of a deep residual network (ResNet) 1150 used by some embodiments. The ResNet 1150 uses so-called residual block to skip one or more layers in the deep learning architecture for robust performance even with very deep network models, to learn the classification vector; para. [0077]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jung and Zeiler to include the feature of Wang. One would have been motivated to make this modification because it improves training and accuracy for deep CNNs.

Allowable Subject Matter

Claims 5 and 6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the 35 U.S.C. § 101 rejection for being directed to an abstract idea is successfully addressed.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kim et al. (Pub. No.
US 20210015375 A1) teaches that the artificial intelligence algorithm 1100 according to an embodiment of the present disclosure may include three consecutive convolutional layers 1101, four ResNet blocks 1102, and one fully connected layer 1103. Each of the ResNet blocks may include three convolutional layers, and a skip connection that directly connects the input of the ResNet block to the output thereof may be used to perform deeper learning.

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN, whose telephone number is (303) 297-4266. The examiner can normally be reached Monday - Thursday, 8:00 am - 5:00 pm MT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matt Ell, can be reached at 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR.
Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAN H TRAN/
Primary Examiner, Art Unit 2141