Prosecution Insights
Last updated: April 19, 2026
Application No. 18/352,645

SWITCHED NEURAL NETWORKS

Non-Final OA §103
Filed: Jul 14, 2023
Examiner: CASANOVA, JORGE A
Art Unit: 2165
Tech Center: 2100 — Computer Architecture & Software
Assignee: GM Cruise Holdings LLC
OA Round: 1 (Non-Final)

Grant Probability: 85% (Favorable); 99% with interview
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 8m

Examiner Intelligence

Career Allow Rate: 85% (664 granted / 783 resolved; +29.8% vs TC avg; above average)
Interview Lift: +20.0% allow rate for resolved cases with an interview vs. without (strong)
Typical Timeline: 2y 8m avg prosecution; 14 applications currently pending
Career History: 797 total applications across all art units
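The headline figures in this card follow directly from the raw counts; a minimal Python sketch reproducing them (the dashboard's exact rounding rules are an assumption):

```python
# Career counts reported for this examiner (from the card above).
granted = 664
resolved = 783

# Career allow rate: granted / resolved, displayed rounded to 85%.
allow_rate = 100 * granted / resolved
print(f"allow rate: {allow_rate:.1f}%")  # 84.8%

# The "+29.8% vs TC avg" delta implies a Tech Center baseline near 55%.
tc_delta = 29.8
implied_tc_avg = allow_rate - tc_delta
print(f"implied TC average: {implied_tc_avg:.1f}%")  # 55.0%
```

The same subtraction applied to the per-statute deltas below gives a consistency check on the dashboard's baseline.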

Statute-Specific Performance

§101: 19.1% (-20.9% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 17.6% (-22.4% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)

TC averages are estimates; based on career data from 783 resolved cases.
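Each per-statute Tech Center baseline can be recovered by subtracting the reported delta from the examiner's rate; a quick cross-check (what the percentages measure for each statute, e.g. an overcome rate, is not stated on this page):

```python
# (examiner rate %, delta vs TC average %) per statute, as listed above.
stats = {
    "§101": (19.1, -20.9),
    "§103": (41.4, +1.4),
    "§102": (17.6, -22.4),
    "§112": (9.3, -30.7),
}

# TC baseline = examiner rate minus the reported delta.
for statute, (rate, delta) in stats.items():
    baseline = round(rate - delta, 1)
    print(f"{statute}: TC average ~ {baseline}%")
# Every implied baseline works out to 40.0%, consistent with a single
# Tech Center average estimate used as one reference value.
```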

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination. This Office action is Non-Final.

Information Disclosure Statement

The information disclosure statement (IDS) filed on 07/13/2023 has been considered by the Examiner and made of record in the application file.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Djuric et al. (US 2018/0107215 A1), hereinafter “Djuric”, further in view of Nosko et al. (US 2021/0042612 A1), hereinafter “Nosko”.
With respect to claims 1, 8 and 15, the Djuric reference discloses a method, system and non-transitory computer-readable storage medium [see Abstract, disclosing a neural network may be utilized for autonomously driving a self-driving vehicle (SDV)] comprising: at least one memory [see ¶0018, disclosing instructions can be stored in one or more memory resources of the computing device]; and at least one processor coupled to the at least one memory, the at least one processor configured [see ¶0021, disclosing implemented through the use of instructions that are executable by one or more processors] to: determine an operating condition for an autonomous vehicle [see ¶0028, disclosing the SDV 100 can be equipped with multiple types of sensors 101, 103, 105 which can combine to provide a computerized perception of the space and the physical environment surrounding the SDV 100]; and perform the task using the sub-neural network configured for the operating condition [see ¶0028, disclosing the control system 120 can analyze the sensor data 111 to generate low level commands 135 executable by one or more controllers 140 that directly control the acceleration system 152, steering system 154, and braking system 156 of the SDV 100; execution of the commands 135 by the controllers 140 can result in throttle inputs, braking inputs, and steering inputs that collectively cause the SDV 100 to operate along sequential road segments to a particular destination]. (emphasis added)

Djuric discloses the method, system and non-transitory computer-readable storage medium, as referenced above. Djuric does not explicitly disclose determine, from a plurality of sub-neural networks of a neural network, a sub-neural network configured for the operating condition, wherein each of the plurality of sub-neural networks is configured to perform a task based on a different operating condition for the autonomous vehicle.

However, Nosko discloses determine, from a plurality of sub-neural networks of a neural network, a sub-neural network configured for the operating condition, wherein each of the plurality of sub-neural networks is configured to perform a task based on a different operating condition [see ¶0019, disclosing analyzing a task to be performed; deciding required properties of a dedicated neural network for performing the task; identifying suitable network modules according to the known training conditions; and linking the identified network modules to construct a dedicated network] for the autonomous vehicle. (emphasis added)

It would have been obvious before the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to apply the known conditional neural-module selection disclosed by Nosko to the known autonomous-vehicle neural-network control of Djuric in order to achieve reduced computation, improved efficiency and predictable performance improvements. See KSR International Co. v. Teleflex Inc., 82 USPQ2d 1385 (U.S. 2007).

With respect to claims 2, 9 and 16, the combination of Djuric and Nosko, as modified, discloses the method, system and non-transitory computer-readable storage medium of claims 1, 8 and 15, as referenced above. The combination further discloses wherein the task comprises at least one of object detection, object tracking, object prediction, path planning, navigation, object recognition, object classification, semantic segmentation, panoptic segmentation, distance estimation, and motion planning [Djuric, see Abstract, disclosing utilizing the sensor data, the neural network can operate acceleration, braking, and steering systems of the SDV to continuously follow the one or more navigation points along an established route to the destination location].
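The claim arrangement characterized in the rejection (one sub-neural network per operating condition, selected at run time) can be sketched as follows. This is purely illustrative: the condition names, model stand-ins, and function names are hypothetical and appear in neither the application nor the cited references.

```python
# Hypothetical stand-ins for sub-neural networks trained per operating
# condition. A real system would hold trained models here, not
# string-producing lambdas.
SUB_NETWORKS = {
    "clear_day": lambda frame: f"clear-day detections for {frame}",
    "rain":      lambda frame: f"rain-tuned detections for {frame}",
    "night":     lambda frame: f"night-tuned detections for {frame}",
}

def determine_sub_network(operating_condition: str):
    """Determine, from the plurality of sub-networks, the one
    configured for the current operating condition."""
    return SUB_NETWORKS[operating_condition]

def perform_task(operating_condition: str, sensor_frame: str) -> str:
    """Perform the task using the sub-network selected for the condition."""
    return determine_sub_network(operating_condition)(sensor_frame)

print(perform_task("rain", "frame_042"))  # rain-tuned detections for frame_042
```

Only the selected sub-network runs per frame, which illustrates the reduced-computation rationale the rejection cites for combining the references.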
With respect to claims 3, 10 and 17, the combination of Djuric and Nosko, as modified, discloses the method, system and non-transitory computer-readable storage medium of claims 1, 8 and 15, as referenced above. The combination further discloses wherein the operating condition comprises at least one of a time-of-day condition, a weather condition, a geographic condition, a road condition, and a traffic condition [Djuric, see ¶0040, disclosing the optimal route 242 can comprise a route that minimizes distance or time with regards to traffic conditions, speed limits, traffic signals, intersections, and the like].

With respect to claims 4, 11 and 18, the combination of Djuric and Nosko, as modified, discloses the method, system and non-transitory computer-readable storage medium of claims 1, 8 and 15, as referenced above. The combination further discloses wherein two or more sub-neural networks of the plurality of sub-neural networks are configured to perform the task based on overlapping operating conditions [Nosko, see ¶0046, disclosing processor 14 executes two or more different modular networks, i.e. different combinations of neural network modules, to perform a neural network process with the same input data].

With respect to claims 5, 12 and 19, the combination of Djuric and Nosko, as modified, discloses the method, system and non-transitory computer-readable storage medium of claims 1, 8 and 15, as referenced above. The combination further discloses wherein determining the sub-neural network from the plurality of sub-neural networks of the neural network is based on at least one of a threshold and a change from a first operating condition to a second operating condition [Nosko, see ¶0046, disclosing processor 14 may choose between the modular networks according to determined criteria, for example after receiving the results of the neural network process from each of the modular networks. For example, processor 14 may receive from each of the modular networks a result of the process, calculate a confidence level for each of the results, and rank the results accordingly and/or select the modular network that provides the better confidence level].

With respect to claims 6, 13 and 20, the combination of Djuric and Nosko, as modified, discloses the method, system and non-transitory computer-readable storage medium of claims 1, 8 and 15, as referenced above. The combination further discloses loading the sub-neural network into at least one of a memory and a cache of an autonomous vehicle [Djuric, see ¶0073, disclosing the memory 661 may store a set of software instructions and/or machine learning algorithms including, for example, the machine learning model 662. The memory 661 may also store road network maps 664, which the processing resources 610, executing the machine learning model 662, can utilize to extract and follow navigation points (e.g., via location-based signals from a GPS module 640), introduce noise to the navigation point signals, determine successive route plans, and execute control actions on the SDV. The machine learning model 662 may be executed by the neural network array 617 in order to autonomously operate the SDV's acceleration 622, braking 624, steering 626, and signaling systems 628 (collectively, the control mechanisms 620). Thus, in executing the machine learning model 662, the neural network array 617 can make mid or high level decisions with regard to upcoming route segments, and the processing resources 610 can receive sensor data 632 from the sensor systems 630 to enable the neural network array 617 to dynamically generate low level control commands 615 for operative control over the acceleration, steering, and braking of the SDV. The neural network array 617 may then transmit the control commands 615 to one or more control interfaces 622 of the control mechanisms 620 to autonomously operate the SDV through road traffic on roads and highways, as described throughout the present disclosure].

With respect to claims 7 and 14, the combination of Djuric and Nosko, as modified, discloses the method and system of claims 1 and 8, as referenced above. The combination further discloses wherein each sub-neural network from the plurality of sub-neural networks of the neural network is configured to perform the task based on a different combination of operating conditions associated with the autonomous vehicle [Djuric, see ¶0028, disclosing the SDV 100 can be equipped with multiple types of sensors 101, 103, 105 which can combine to provide a computerized perception of the space and the physical environment surrounding the SDV 100; and Nosko, see ¶0046, disclosing processor 14 executes two or more different modular networks, i.e. different combinations of neural network modules, to perform a neural network process with the same input data].

Prior Art Made of Record

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yu (’825) discloses neural network architecture construction. Yu (’905) discloses compressing neural networks. Tasinga et al. disclose a neural network scheduler. Arroyo et al. disclose autonomous vehicle perception multimodal sensor data management. Lu et al. disclose adaptive perception by vehicle sensors. Ogale et al. disclose neural networks for vehicle trajectory planning. Luo et al. disclose cooperative learning neural networks and systems. Lee et al. disclose a selective attention method using a neural network.

Conclusion / Points of Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CASANOVA, whose telephone number is (571) 270-3563. The examiner can normally be reached M-F, 9 a.m. to 6 p.m. (EST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JORGE A CASANOVA/
Primary Examiner, Art Unit 2165

Prosecution Timeline

Jul 14, 2023: Application Filed
Feb 10, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596748: GRAPH DATABASE STORAGE ENGINE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591620: TEMPORAL GRAPH ANALYTICS ON PERSISTENT MEMORY (granted Mar 31, 2026; 2y 5m to grant)
Patent 12566798: CAUSAL ANALYSIS WITH TIME SERIES DATA (granted Mar 03, 2026; 2y 5m to grant)
Patent 12554734: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM (granted Feb 17, 2026; 2y 5m to grant)
Patent 12554739: CONFIGURATION-DRIVEN EFFICIENT TRANSFORMATION OF FORMATS AND OBJECT STRUCTURES FOR DATA SPECIFICATIONS IN COMPUTING SERVICES (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85% (99% with interview, +20.0%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 783 resolved cases by this examiner. Grant probability derived from career allow rate.
