Prosecution Insights
Last updated: April 19, 2026
Application No. 18/652,473

OPERATING LAW AWARE PLANNING CRITERIA FOR INTELLIGENT MACHINES AND NEURAL MOTION PLANNERS INTEGRATED WITH THE PLANNING CRITERIA

Non-Final OA: §101, §102, §103
Filed: May 01, 2024
Examiner: TC 3600
Art Unit: 3600
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NVIDIA Corporation
OA Round: 1 (Non-Final)
Grant Probability: 4% (At Risk)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 1y 1m
Grant Probability With Interview: 5%

Examiner Intelligence

Career Allow Rate: 4% (5 granted / 142 resolved; -48.5% vs TC avg). Grants only 4% of cases.
Interview Lift: +1.5% (minimal, roughly +2%, lift) across resolved cases with interview.
Avg Prosecution: 1y 1m (fast prosecutor). 206 currently pending.
Career History: 348 total applications across all art units.

Statute-Specific Performance

§101: 36.1% (-3.9% vs TC avg)
§103: 34.6% (-5.4% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Rates are compared against a Tech Center average estimate. Based on career data from 142 resolved cases.
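The figures in the two panels above can be reproduced with simple arithmetic. A minimal sketch, using only the report's own numbers (5 granted / 142 resolved, and a Tech Center average back-computed from the displayed deltas, which works out to 40.0% for every statute); the dictionaries are illustrative, not part of any analytics API:

```python
# Career allow rate: 5 granted out of 142 resolved cases.
granted, resolved = 5, 142
career_allow_rate = granted / resolved            # ~3.5%, displayed rounded as 4%
print(f"Career allow rate: {career_allow_rate:.1%}")

# Per-statute overcome rates from the panel, compared against the
# Tech Center average estimate implied by the displayed deltas (40.0%).
examiner_rate = {"101": 0.361, "103": 0.346, "102": 0.139, "112": 0.109}
tc_avg_estimate = 0.400

for statute, rate in examiner_rate.items():
    delta = rate - tc_avg_estimate
    print(f"\u00a7{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```

Note that every delta shown in the panel (-3.9%, -5.4%, -26.1%, -29.1%) is consistent with the same 40.0% baseline, which suggests a single Tech Center average estimate is applied across statutes.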

Office Action

§101 §102 §103
DETAILED ACTION

Claims 1-30 are currently pending and have been examined in this application. This communication is the first action on the merits. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

Claims 1-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

The claims are directed to either a method or an apparatus, each of which is one of the statutory categories of invention. (Step 1: YES)

The examiner has identified claim 1, which substantially includes all the limitations of claims 12, 27 and 29, as the claim that represents the claimed invention for analysis. Independent claim 1 recites the following limitations (bolded text corresponds to the abstract idea):

A method of operating an autonomous vehicle (AV), comprising: scalably expressing traffic laws and additional planning criteria in a universal planning criteria (UPC) framework; and generating, using a neural motion planner and the UPC framework, a planned trajectory for the AV.

Under its broadest reasonable interpretation, this method is generating a planned trajectory for an autonomous vehicle using a neural motion planner. If the broadest reasonable interpretation of a claim limitation entails performance in the human mind, then it falls within the mental processes grouping of abstract ideas. Therefore, the claim recites an abstract idea. (Step 2A, Prong 1: YES. The claims are abstract.)

This judicial exception is not integrated into a practical application.
Limitations that are not indicative of integration into a practical application include: (1) adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); and (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)).

In particular, the claims recite the additional element of scalably expressing traffic laws and additional planning criteria in a universal planning criteria (UPC) framework. This step is recited at a high level of generality and does not comprise any of the above additional elements that, individually or in combination, would integrate the judicial exception into a practical application. Specifically, the step of scalably expressing traffic laws and additional planning criteria in a UPC framework constitutes mere data gathering and is insignificant extra-solution activity. There are no additional elements that apply or use the judicial exception in some other meaningful way beyond generally linking its use to a particular technological environment. (Step 2A, Prong 2: NO. The additional claimed elements are not integrated into a practical application.)

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an "inventive concept") to the exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use, and to insignificant extra-solution activity. See MPEP 2106.05(g) for more details. Generally linking the use of the judicial exception to a particular technological environment or field of use cannot provide an inventive concept, rendering the claim patent ineligible. Thus, claim 1 and, similarly, the other independent claims are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)

The dependent claims further define the abstract idea that is present in their respective independent claims and hence are abstract for at least the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or that are sufficient to amount to significantly more than the judicial exception, when considered both individually and as an ordered combination. Therefore, the dependent claims are directed to an abstract idea, and the aforementioned claims are not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 10-13, 21-26 and 29-30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zeng (US20200159225A1).

Claim 1.
Zeng teaches the following limitations:

A method of operating an autonomous vehicle (AV), comprising: scalably expressing traffic laws and additional planning criteria in a universal planning criteria (UPC) framework; (Zeng – [0088] Access to a map can enable accurate motion planning, such as by permitting the autonomous vehicle to drive according to traffic rules (e.g., stop at a red light, follow the lane, change lanes only when allowed). Towards this goal, the backbone network can exploit high-definition maps that contain information about the semantics of the scene, such as lane location, the boundary type (e.g., solid, dashed) and the location of stop signs or other signs. In some examples, the map can be rasterized to form an M-channel tensor, where each channel represents a different map element.; [0092] A final convolution layer can be applied with a filter number T, which corresponds to planning horizon. Each filter can generate a cost volume ct for a future time step t. This allows the machine-learned motion planning model 202 to evaluate the cost of any trajectory s by simply indexing in the cost volume c.)

and generating, using a neural motion planner and the UPC framework, a planned trajectory for the AV. (Zeng – [0031] The minimization can be approximated by sampling a set of physically valid trajectories, and picking the trajectory having the minimum cost using a cost volume. The cost volume can be a learned cost volume generated by a convolutional neural network backbone. The convolutional neural network can extract features from both the LIDAR data and the map data to generate a feature map; [0064] The motion planning system 160 can be configured to continuously update the vehicle's motion plan 162 and a corresponding planned vehicle motion trajectory)

Claim 10.
Zeng teaches the following limitations: The method as recited in Claim 1, wherein the planned trajectory is a first planned trajectory and the generating further includes generating a second planned trajectory using a classical motion planner, fusing the first and second planned trajectories, and providing a third planned trajectory based on the fusing. (Zeng – [0030] The trajectory generator can index the cost of each potential trajectory from different filters of the cost volume and sum them together to generate a trajectory score in some examples. The trajectory generator can select the trajectory with the minimum cost for final motion planning in some examples. In another example, the trajectory generator can optimize a single sampled trajectory using the cost volume. For example, the trajectory generator can include an optimizer that optimizes a sampled trajectory by minimizing the cost computed for the trajectory using the cost volume.)

Claim 11.

Zeng teaches the following limitations: The method as recited in Claim 1, further comprising directing operation of the AV using the planned trajectory. (Zeng - [0049] The vehicle 102 can be configured to operate in a plurality of operating modes. For example, the vehicle 102 can be configured to operate in a fully autonomous (e.g., self-driving) operating mode in which the vehicle 102 is controllable without user input (e.g., can drive and navigate with no input from a vehicle operator present in the vehicle 102 and/or remote from the vehicle 102))

Claim 12.

Zeng teaches the following limitations: A method of operating a machine, comprising: representing operating laws in motion planning for the machine by scalably expressing the operating laws and other planning criteria in a UPC framework (Zeng – [0088] Access to a map can enable accurate motion planning, such as by permitting the autonomous vehicle to drive according to traffic rules (e.g., stop at a red light, follow the lane, change lanes only when allowed).
Towards this goal, the backbone network can exploit high-definition maps that contain information about the semantics of the scene, such as lane location, the boundary type (e.g., solid, dashed) and the location of stop signs or other signs. In some examples, the map can be rasterized to form an M-channel tensor, where each channel represents a different map element.; [0092] A final convolution layer can be applied with a filter number T, which corresponds to planning horizon. Each filter can generate a cost volume ct for a future time step t. This allows the machine-learned motion planning model 202 to evaluate the cost of any trajectory s by simply indexing in the cost volume c.)

and embedding the UPC in a neural motion planner; (Zeng – [0097] In some implementations, the backbone network can generate a feature map based on the sensor data and the map data and provide the feature map as input to one or more convolutional neural networks configured to generate the intermediate representations and the cost volume(s).)

generating planned trajectories by the neural motion planner using the UPC; and operating the machine using the planned trajectories. (Zeng – [0031] The minimization can be approximated by sampling a set of physically valid trajectories, and picking the trajectory having the minimum cost using a cost volume. The cost volume can be a learned cost volume generated by a convolutional neural network backbone. The convolutional neural network can extract features from both the LIDAR data and the map data to generate a feature map; [0064] The motion planning system 160 can be configured to continuously update the vehicle's motion plan 162 and a corresponding planned vehicle motion trajectory)

Claim 13.

Zeng teaches the following limitations: The method as recited in Claim 12, wherein the operating laws are traffic laws and the machine is an autonomous vehicle.
(Zeng – [0088] Access to a map can enable accurate motion planning, such as by permitting the autonomous vehicle to drive according to traffic rules (e.g., stop at a red light, follow the lane, change lanes only when allowed). Towards this goal, the backbone network can exploit high-definition maps that contain information about the semantics of the scene, such as lane location, the boundary type (e.g., solid, dashed) and the location of stop signs or other signs. In some examples, the map can be rasterized to form an M-channel tensor, where each channel represents a different map element.; [Abstract] Systems and methods for generating motion plans including target trajectories for autonomous vehicles are provided.)

Claim 21.

Zeng teaches the following limitations: The method as recited in Claim 12, wherein the machine is a robot. (Zeng – [0068] Likewise, a smart phone with one or more cameras, a robot, augmented reality system, and/or another type of system can utilize aspects of the present disclosure to generate target trajectories)

Claim 22.

Zeng teaches the following limitations: A control system for a machine, comprising: (Zeng – [0069] The motion planning system 160 then can provide the selected motion plan to a vehicle control system 138 that controls one or more vehicle controls (e.g., actuators or other devices that control gas flow, steering, braking, etc.) to execute the selected motion plan.)

one or more processing units configured to generate planned trajectories for the machine based on learning and operating laws for the machine represented by a UPC; (Zeng – [0031] The minimization can be approximated by sampling a set of physically valid trajectories, and picking the trajectory having the minimum cost using a cost volume. The cost volume can be a learned cost volume generated by a convolutional neural network backbone.
The convolutional neural network can extract features from both the LIDAR data and the map data to generate a feature map; [0064] The motion planning system 160 can be configured to continuously update the vehicle's motion plan 162 and a corresponding planned vehicle motion trajectory; [0127] These means can include processor(s), microprocessor(s), graphics processing unit(s), logic circuit(s), dedicated circuit(s), application-specific integrated circuit(s), programmable array logic, field-programmable gate array(s), controller(s), microcontroller(s), and/or other suitable hardware)

and a control unit configured to receive the planned trajectories and direct operation of the machine based on the planned trajectories. (Zeng - [0049] The vehicle 102 can be configured to operate in a plurality of operating modes. For example, the vehicle 102 can be configured to operate in a fully autonomous (e.g., self-driving) operating mode in which the vehicle 102 is controllable without user input (e.g., can drive and navigate with no input from a vehicle operator present in the vehicle 102 and/or remote from the vehicle 102))

Claim 23.

Zeng teaches the following limitations: The control system as recited in Claim 22, wherein the one or more processing units are integrated within a neural motion planner. (Zeng – [0144] According to an aspect of the present disclosure, the computing system 1002 can store or include one or more machine-learned models 1010. As examples, the machine-learned models 1010 can be or can otherwise include various machine-learned models such as, for example, neural networks (e.g., deep neural networks) or other types of models including linear models and/or non-linear models. Example neural networks include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks, or other forms of neural networks.)

Claim 24.
Zeng teaches the following limitations: The control system as recited in Claim 22, where the one or more processing units include a graphics processing unit. (Zeng - [0127] These means can include processor(s), microprocessor(s), graphics processing unit(s), logic circuit(s), dedicated circuit(s), application-specific integrated circuit(s), programmable array logic, field-programmable gate array(s), controller(s), microcontroller(s), and/or other suitable hardware)

Claim 25.

Zeng teaches the following limitations: The control system as recited in Claim 22, wherein the machine is an autonomous vehicle. (Zeng - [Abstract] Systems and methods for generating motion plans including target trajectories for autonomous vehicles are provided.)

Claim 26.

Zeng teaches the following limitations: The control system as recited in Claim 22, wherein the one or more processing units generate the planned trajectories in real time and the control unit directs operation of the machine in real time based on the planned trajectories. (Zeng – [0044] By utilizing a machine-learned motion planning model that handles uncertainty as well as multimodality, an autonomous vehicle can increase the accuracy and efficiency of motion planning in real time and thereby increase the safety and reliability of autonomous vehicles.)

Claim 29.

Zeng teaches the following limitations: A machine, comprising: one or more operational domains; (Zeng – [0049] In some implementations, the vehicle 102 can implement vehicle operating assistance technology (e.g., collision mitigation system, power assist steering, etc.) while in the manual operating mode to help assist the vehicle operator 106 of the vehicle 102.)
a motion planner having one or more neural networks configured to generate planned trajectories for the machine based on operating laws for the machine represented by a universal planning criteria; (Zeng – [0031] The minimization can be approximated by sampling a set of physically valid trajectories, and picking the trajectory having the minimum cost using a cost volume. The cost volume can be a learned cost volume generated by a convolutional neural network backbone. The convolutional neural network can extract features from both the LIDAR data and the map data to generate a feature map; [0064] The motion planning system 160 can be configured to continuously update the vehicle's motion plan 162 and a corresponding planned vehicle motion trajectory)

and a control unit having one or more processors configured to receive the planned trajectories and direct operation of the one or more operational domains using commands based on the planned trajectories. (Zeng - [0049] The vehicle 102 can be configured to operate in a plurality of operating modes. For example, the vehicle 102 can be configured to operate in a fully autonomous (e.g., self-driving) operating mode in which the vehicle 102 is controllable without user input (e.g., can drive and navigate with no input from a vehicle operator present in the vehicle 102 and/or remote from the vehicle 102))

Claim 30.

Zeng teaches the following limitations: The machine as recited in Claim 29, wherein the one or more operational domains include at least one of a chassis domain, a powertrain domain, or a steering domain. (Zeng – [0049] In some implementations, the vehicle 102 can implement vehicle operating assistance technology (e.g., collision mitigation system, power assist steering, etc.) while in the manual operating mode to help assist the vehicle operator 106 of the vehicle 102.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3, 14-15, and 27-28 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng (US20200159225A1) in view of Tebbens (US20220187837).

Claim 2.

Zeng teaches the method as recited in Claim 1; however, it does not explicitly teach the following limitations: wherein the scalably expressing includes expressing each rule of the traffic laws as a signal temporal logic (STL) formula.

However, Tebbens teaches: The method as recited in Claim 1, wherein the scalably expressing includes expressing each rule of the traffic laws as a signal temporal logic (STL) formula. (Tebbens – [0049] The reduction step associates each interval with a violation metric used to evaluate the trajectory. In an embodiment, a signal temporal logic (STL) framework is used to specify driving rules and an arithmetic-geometric mean (AGM) framework is used to score (measure the robustness of) trajectories. The STL framework uses qualitative and quantitative semantics to assess whether and how well a trajectory follows rules in a rulebook.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zeng with Tebbens in order to assess whether and how well a trajectory follows rules in a rulebook. (Tebbens – [0049])

Claim 3.
The combination of Zeng and Tebbens teaches the method as recited in Claim 2, and Tebbens further teaches wherein: the rules are organized in the form of a hierarchy. (Tebbens – [0062] Possible priority structures include but are not limited to: hierarchical structures (e.g., total order or pre-order on different degrees of rule violations), non-hierarchical structures (e.g., a weighting system on the rules) or a hybrid priority structure in which subsets of rules are hierarchical but rules within each subset are non-hierarchical) See claim 2 for a statement of obviousness rationale.

Claim 14. Rejected under the same rationale as claim 2.

Claim 15. Rejected under the same rationale as claim 3.

Claim 27.

Zeng teaches the following limitations: A computer program product having a series of operating instructions stored on a non-transitory computer-readable medium that directs a data processing apparatus when executed thereby to perform operations to direct operation of an intelligent machine, the operations comprising: (Zeng – [0052] For instance, the computing device(s) can include one or more processors and one or more tangible, non-transitory, computer readable media (e.g., memory devices, etc.). The one or more tangible, non-transitory, computer readable media can store instructions that when executed by the one or more processors cause the vehicle 102 (e.g., its computing system, one or more processors, etc.) to perform operations and functions, such as those described herein for identifying travel way features.)

scalably expressing traffic laws and additional planning criteria in a universal planning criteria (UPC) framework, (Zeng – [0088] Access to a map can enable accurate motion planning, such as by permitting the autonomous vehicle to drive according to traffic rules (e.g., stop at a red light, follow the lane, change lanes only when allowed).
Towards this goal, the backbone network can exploit high-definition maps that contain information about the semantics of the scene, such as lane location, the boundary type (e.g., solid, dashed) and the location of stop signs or other signs. In some examples, the map can be rasterized to form an M-channel tensor, where each channel represents a different map element.; [0092] A final convolution layer can be applied with a filter number T, which corresponds to planning horizon. Each filter can generate a cost volume ct for a future time step t. This allows the machine-learned motion planning model 202 to evaluate the cost of any trajectory s by simply indexing in the cost volume c.)

generating, using a neural motion planner and the UPC framework, planned trajectories for the intelligent machine; and directing movement of the intelligent machine using the planned trajectories. (Zeng – [0031] The minimization can be approximated by sampling a set of physically valid trajectories, and picking the trajectory having the minimum cost using a cost volume. The cost volume can be a learned cost volume generated by a convolutional neural network backbone. The convolutional neural network can extract features from both the LIDAR data and the map data to generate a feature map; [0064] The motion planning system 160 can be configured to continuously update the vehicle's motion plan 162 and a corresponding planned vehicle motion trajectory; [0127] These means can include processor(s), microprocessor(s), graphics processing unit(s), logic circuit(s), dedicated circuit(s), application-specific integrated circuit(s), programmable array logic, field-programmable gate array(s), controller(s), microcontroller(s), and/or other suitable hardware; [0049] The vehicle 102 can be configured to operate in a plurality of operating modes.
For example, the vehicle 102 can be configured to operate in a fully autonomous (e.g., self-driving) operating mode in which the vehicle 102 is controllable without user input (e.g., can drive and navigate with no input from a vehicle operator present in the vehicle 102 and/or remote from the vehicle 102))

Zeng does not explicitly teach the following limitations: wherein the scalably expressing includes expressing each rule of the traffic laws as a signal temporal logic (STL) formula.

However, Tebbens teaches: the scalably expressing includes expressing each rule of the traffic laws as a signal temporal logic (STL) formula. (Tebbens – [0049] The reduction step associates each interval with a violation metric used to evaluate the trajectory. In an embodiment, a signal temporal logic (STL) framework is used to specify driving rules and an arithmetic-geometric mean (AGM) framework is used to score (measure the robustness of) trajectories. The STL framework uses qualitative and quantitative semantics to assess whether and how well a trajectory follows rules in a rulebook.) See claim 2 for a statement of obviousness rationale.

Claim 28.

The combination of Zeng and Tebbens teaches the computer program product as recited in Claim 27, and Zeng further teaches wherein the intelligent machine is an autonomous vehicle. (Zeng - [Abstract] Systems and methods for generating motion plans including target trajectories for autonomous vehicles are provided.)

Claims 4-6, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng (US20200159225A1) in view of Tebbens (US20220187837), and further in view of Martin (US20240217549).

Claim 4.
The combination of Zeng and Tebbens teaches the method as recited in Claim 3; however, it does not explicitly teach the following limitations: wherein the scalably expressing further includes transforming the rules into a differentiable scalar reward function.

However, Martin teaches: The method as recited in Claim 3, wherein the scalably expressing further includes transforming the rules into a differentiable scalar reward function. (Martin – [0056] During operation of the process 400 of MCTS, for tree expansion 420, at the start of each iteration, a TreePolicy function can determine a leaf node (e.g., nodes 450a, 450c, 450d) to expand. This leaf node can be chosen in such a way as to balance exploitation of previous state-value estimates (in the reinforcement sense) and exploration of new actions. Such a balance can be given by the upper confidence bound (UCB). The UCB formula can use an exploration constant c such that if c=0 the "best" child n_c of a node n can be that which has the maximum average value Q(n_c)/N(n_c), where Q(n_c) can be the total reward accumulated at node n_c and N(n_c) can be the number of visits of the same node)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zeng with Martin in order to provide a method for scaling computation up and down to match the desired complexity or accuracy of the solution. (Martin - [0058])

Claim 5.

Zeng teaches the method as recited in Claim 1; however, it does not explicitly teach the following limitations: wherein the neural motion planner uses a post-hoc trajectory pruning method.

However, Martin teaches: The method as recited in Claim 1, wherein the neural motion planner uses a post-hoc trajectory pruning method. (Martin – [0027] Efficiently pruning the search space can be one approach that can help to reduce computation time.
Such a pruning approach can require a careful and smart strategy for an explore-and-exploit tradeoff.) See claim 4 for a statement of obviousness rationale.

Claim 6.

Zeng teaches the following limitations: The method as recited in Claim 1, wherein the neural motion planner uses an imitation learning method (Zeng – [0044] Compared with traditional machine-learned model approaches, such as imitation learning approaches that directly regress steer angle from raw sensor data, a machine-learned model in accordance with the disclosed technology may provide interpretability and handle multi-modality naturally. For instance, when compared with traditional approaches which use manually designed cost functions built on top of perception and prediction systems, a motion planning model in accordance with the disclosed technology can provide the advantage of being jointly trained. Thus, learned representations that are more optimal for the end task of motion planning can be provided.)

Zeng does not explicitly teach the following limitations: with a UPC reward.

However, Martin teaches: wherein the neural motion planner uses an imitation learning method with a UPC reward. (Martin – [0056] The UCB formula can use an exploration constant c such that if c=0 the "best" child n_c of a node n can be that which has the maximum average value Q(n_c)/N(n_c), where Q(n_c) can be the total reward accumulated at node n_c and N(n_c) can be the number of visits of the same node) See claim 4 for a statement of obviousness rationale.

Claim 16. Rejected under the same rationale as claim 4.

Claim 20. Rejected under the same rationale as claim 5.

Claims 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng (US20200159225A1) in view of Li (US20220277652).

Claim 17.

Zeng teaches the method as recited in Claim 12, but does not explicitly teach the following limitations: wherein the embedding is explicit.
However, Li teaches: The method as recited in Claim 12, wherein the embedding is explicit. (Li – [0008] In some embodiments, the inputting the plurality of current features associated with the vehicle into the trained neural network comprises: inputting the one grid cell in which the vehicle is currently located into a mask-based embedding layer of the neural network to obtain an embedded vector representation of the one grid cell)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zeng with Li in order to predict the conditional action values of repositioning options for a vehicle. (Li – [0078])

Claim 19.

Zeng teaches the method as recited in Claim 12, but does not explicitly teach the following limitations: wherein the embedding is via a UPC rule robustness vector.

However, Li teaches: The method as recited in Claim 12, wherein the embedding is via a UPC rule robustness vector. (Li – [0080] The purpose of performing cerebellar embedding to some of the input features may include obtaining distributed, robust, and generalizable feature representations of the features. In some embodiments, to better ensure the robustness of the neural network against input perturbations) See claim 17 for a statement of obviousness rationale.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Zeng (US20200159225A1) in view of Efrat (US20210276574).

Claim 18.

Zeng teaches the method as recited in Claim 12, but does not explicitly teach the following limitations: wherein the embedding is via a UPC probability vector.

However, Efrat teaches: The method as recited in Claim 12, wherein the embedding is via a UPC probability vector. (Efrat – [Abstract] The concatenation, or clustering is accomplished via the feature embeddings.
[0060] The angle bin estimation is optimized using a soft multi-label objective, and the ground truth is calculated as the segment's angle proximity to the bin centers, e.g., for θ.sub.seg=0 the ground truth class probability vector would be pα=(1, 0, 0, 0) and for θ.sub.seg=π the probability vector would be pα=(0.5, 0.5, 0, 0).) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zeng with Efrat in order to provide a deep learning approach to unify the feature extraction process and the classification step through several layers of an artificial neural network. (Efrat – [0043])

Allowable Subject Matter

Claims 7-9 are objected to due to their dependency on a rejected base claim, but would be allowable if rewritten in independent form to include all of the limitations of the base claim and any intervening claims. The allowable subject matter is the limitations as specifically claimed, which the prior art of record has not been found to adequately teach or disclose at this time.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT FENG, whose telephone number is (703) 756-4715. The examiner can normally be reached M-F, 8:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, NAVID MEHDIZADEH, can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VINCENT FENG/
Examiner, Art Unit 3669

/NAVID Z. MEHDIZADEH/
Supervisory Patent Examiner, Art Unit 3669
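The UCB selection rule quoted from Martin [0056] in the claim 6 rejection can be sketched in code. This is a generic UCB1-style illustration under assumed names (Node, ucb_score, best_child are not from the reference); with the exploration constant c set to 0 it reduces, as the quote states, to picking the child with the maximum average accumulated reward Q(n_c)/N(n_c):

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    total_reward: float  # Q(n_c): total reward accumulated at the child node
    visits: int          # N(n_c): number of visits of the same node

def ucb_score(child: Node, parent_visits: int, c: float) -> float:
    # Average value plus a c-weighted exploration bonus (standard UCB1 form).
    exploit = child.total_reward / child.visits
    explore = c * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

def best_child(children: list[Node], parent_visits: int, c: float = 0.0) -> Node:
    # With c = 0 this is simply the argmax of the average value Q(n_c)/N(n_c).
    return max(children, key=lambda ch: ucb_score(ch, parent_visits, c))

children = [Node(10.0, 5), Node(6.0, 2), Node(1.0, 1)]  # averages 2.0, 3.0, 1.0
print(best_child(children, parent_visits=8).total_reward)  # 6.0
```

Raising c above 0 shifts selection toward rarely visited children, which is the explore-and-exploit tradeoff the Martin quote in the claim 5 rejection alludes to.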

Prosecution Timeline

May 01, 2024
Application Filed
Dec 03, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 8813663
SEEDING MACHINE WITH SEED DELIVERY SYSTEM
2y 5m to grant; Granted Aug 26, 2014

Interconnection module of the ornamental electrical molding
Granted

SYSTEMS AND METHODS FOR ENTITY SPECIFIC, DATA CAPTURE AND EXCHANGE OVER A NETWORK
Granted

Systems and Methods for Performing Workflow
Granted

DISTRIBUTED LEDGER PROTOCOL TO INCENTIVIZE TRANSACTIONAL AND NON-TRANSACTIONAL COMMERCE
Granted
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
4%
Grant Probability
5%
With Interview (+1.5%)
1y 1m
Median Time to Grant
Low
PTA Risk
Based on 142 resolved cases by this examiner. Grant probability derived from career allow rate.
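The headline projection figures can be reproduced from the examiner statistics shown earlier (5 granted of 142 resolved cases, +1.5 point interview lift). A minimal sketch, assuming the dashboard simply rounds the raw career allow rate and treats the interview lift as additive; the function names are illustrative, not the tool's actual formula:

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage (granted / resolved cases)."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float = 1.5) -> float:
    """Assumed additive interview lift on top of the base probability."""
    return base_pct + lift_pct

base = grant_probability(granted=5, resolved=142)  # ~3.5%
print(round(base))                  # 4  (the "Grant Probability" figure)
print(round(with_interview(base)))  # 5  (the "With Interview" figure)
```

Under these assumptions the 4% and 5% figures above fall out of the 5/142 career record directly.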
