Prosecution Insights
Last updated: April 19, 2026
Application No. 17/937,772

NEURAL NETWORK APPLICATIONS IN RESOURCE CONSTRAINED ENVIRONMENTS

Non-Final OA (§103, §DP)
Filed: Oct 03, 2022
Examiner: CHEN, ALAN S
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Alpine Electronics of Silicon Valley, Inc.
OA Round: 2 (Non-Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 2y 11m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 91%, above average (1025 granted / 1126 resolved; +36.0% vs TC avg)
Interview Lift: +6.3% (moderate), measured on resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 22 currently pending
Career History: 1148 total applications across all art units

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 20.8% (-19.2% vs TC avg)
§102: 37.5% (-2.5% vs TC avg)
§112: 19.9% (-20.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 1126 resolved cases.

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/5/2026 has been entered.

Response to Arguments

Applicant’s arguments in light of the amendment filed on 1/5/2026 with respect to the statutory double patenting rejection have been fully considered and are persuasive. The statutory double patenting rejection of claims 1-20 has been withdrawn.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: ‘DISTRIBUTED NEURAL NETWORK SYSTEM FOR VEHICLE OCCUPANT MONITORING AND CONTROL’.

The disclosure is objected to because of the following informalities:
In ¶5, “incudes” should be ‘includes’.
In page 6, last two lines, “In such embodiments, the automobile.” appears extraneous.
In ¶23, “configured to produced third sensor data” should be ‘configured to produce third sensor data’.
In ¶24, “configured to produced third sensor data” should be ‘configured to produce third sensor data’.
Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5, 8, 9, 11-17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 6,236,908 to Cheng et al. (hereinafter Cheng) in view of “Visualizing and Understanding Convolutional Networks” by Zeiler et al. (hereinafter Zeiler).

Per claim 1, Cheng discloses A system (fig. 1 and col.
2, lns 48-59… control system 10) comprising: one or more sensors located in an environment configured to generate first sensor data of the environment (fig. 1 and col. 2:48-col. 4:8 …various physical sensors 16 and virtual sensors 68 located in a vehicle environment may generate sensor data of the vehicle, any one of which can be construed as first sensor data); one or more computing devices (fig. 1:14 and col. 3:14-27…controller 14 is a computing device, “Controller 14 of FIG. 1 preferably includes a microprocessor unit (MPU) 42 in communication with various computer-readable storage media”) configured to receive a neural network (fig. 1:68 and col. 3:14-27…controller stores/receives a neural network 68), wherein the one or more computing devices are configured to receive the first sensor data and determine a state of the environment based on input of the first sensor data to the neural network (col. 3:46-55…controller receives signals from physical sensors (first sensor data) which are input to the neural network to determine a second operating parameter (state of the environment), such as residual mass fraction (RMF), “Virtual sensor 68 may be used to determine a value for an engine operating parameter which is difficult or costly to measure directly. Values for various physical-based parameters are input to virtual sensor 68, such as those values which represent the physical signals generated by physical sensors 16”; col. 6, lns 24-33, “RMF is the fundamental engine parameter which is being controlled. The present invention may be utilized to develop a more robust control strategy since RMF may be dynamically determined and therefore controlled. Once the value for a particular parameter (RMF in this case) can be directly determined, traditional control techniques may be applied, i.e.
comparing the measured value to the optimal, and adjusting either EGR or valve timing to correct any deviation”); and a controller configured to control a device in the environment based on a result of the determination of the state of the environment by the one or more computing devices (fig. 1:14 and col. 3:27-67…controller 14 controls vehicle components, e.g., devices in the environment, such as "fuel injectors," "spark controller," or "EGR valve" based on the second operating parameter, e.g., state of the environment, determined by the neural network, “Microprocessor 42 generates control and command signals which are communicated via output ports 54 to various actuators, indicated generally by reference numeral 56. Actuators may include a fuel controller 58 which provides appropriate signals for one or more fuel injectors (not specifically illustrated). Other actuators may include a spark controller 60 and an exhaust gas recirculation (EGR) valve 62. EGR valve 62 is used to control the amount of exhaust gases routed from exhaust 28 to intake 64 via plumbing 66... Virtual sensor 68 may be used to determine a value for an engine operating parameter which is difficult or costly to measure directly. Values for various physical-based parameters are input to virtual sensor 68, such as those values which represent the physical signals generated by physical sensors 16. Various other signals may provide input to virtual sensor 68 to dynamically determine values for various engine operating parameters. Such signals may be indicative of air/fuel ratio, cam timing, air charge temperature, oil temperature, and the like…The output from one or more virtual sensors enables controller 14 to better account for the internal processes of engine 12.
This information may be used to improve control of engine 12 by adjusting various functions such as spark timing, EGR level, fuel injection timing, cam timing, or fuel pulsewidth to minimize fuel consumption,…”), … Cheng does not expressly disclose, but Zeiler does teach: wherein the one or more computing devices are configured to calculate an activation area for the neural network (Zeiler: fig. 6… method for "Visualizing and Understanding Convolutional Networks" to identify which parts of input data stimulate the network by calculating which portions of an input are responsible for activation, effectively an "activation area" or discriminative parts of the neural network, “we systematically cover up different portions of the scene with a gray square (1st column) and see how the top (layer 5) feature maps ((b) & (c)) and classifier output ((d) & (e)) changes. (b): for each position of the gray scale, we record the total activation in one layer 5 feature map (the one with the strongest response in the unoccluded image). (c): a visualization of this feature map projected down into the input image (black square), along with visualizations of this map from other images”; Zeiler: Section 4.2…”With image classification approaches, a natural question is if the model is truly identifying the location of the object in the image, or just using the surrounding context. Fig. 6 attempts to answer this question by systematically occluding different portions of the input image with a grey square, and monitoring the output of the classifier. The examples clearly show the model is localizing the objects within the scene, as the probability of the correct class drops significantly when the object is occluded. Fig.
6 also shows visualizations from the strongest feature map of the top convolution layer, in addition to activity in this map (summed over spatial locations) as a function of occluder position”; Zeiler teaches the calculation of an activation area (feature map activity/sensitivity) to understand neural network operation. Combining this with Cheng would allow the controller to determine which sensors are driving the state determination), wherein the one or more computing devices are configured to calculate the activation area for the neural network, at least in part, by iteratively providing each of a plurality of sensor data as input to the neural network (Zeiler: Section 4.2…using "Occlusion Sensitivity" analysis that "systematically occlude different portions of the input image", demonstrating iteratively providing modified/occluded data and passing it to the neural network to map sensitivity), wherein each sensor data of the plurality of sensor data is formed by placing a mask at a corresponding location in the first sensor data (Zeiler: Section 4.2…masking input data by "occluding different portions of the input image with a grey square" (placing a mask) at various locations, specifically teaches the formation of the iterative test data by placing a mask (grey square) at corresponding locations in the original input data; Zeiler: fig. 
6…” we systematically cover up different portions of the scene with a gray square (1st column) and see how the top (layer 5) feature maps ((b) & (c)) and classifier output ((d) & (e)) changes”), wherein the one or more computing devices are configured to calculate the activation area for the neural network, at least in part, by comparing a first result of providing the first sensor data as input to the neural network to a corresponding result of providing each of the plurality of sensor data as input to the neural network (Zeiler: Section 4.2…performing monitoring of the output of the classifier, e.g., neural network, when the input is occluded and comparing it to the original output, “Fig. 6 attempts to answer this question by systematically occluding different portions of the input image with a grey square, and monitoring the output of the classifier”; Zeiler: Section 4.2…comparison step of comparing the network output (result) of the masked input to the original output to determine the importance (activation area) of the masked region, “The examples clearly show the model is localizing the objects within the scene, as the probability of the correct class drops significantly when the object is occluded. Fig. 6 also shows visualizations from the strongest feature map of the top convolution layer, in addition to activity in this map (summed over spatial locations) as a function of occluder position. When the occluder covers the image region that appears in the visualization, we see a strong drop in activity in the feature map”). Cheng and Zeiler are analogous art because they both involve neural network systems and analysis. Cheng applies neural networks to vehicle control systems, while Zeiler focuses on visualizing and diagnosing the operation of neural networks, specifically convolutional networks. Both references address the problem of utilizing and understanding neural network outputs. 
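For reference, the occlusion-sensitivity technique the rejection relies on reduces to a simple loop: slide a gray mask over the input, re-run the classifier, and record the drop in the target-class probability relative to the unmasked baseline. The sketch below is a minimal illustration using a toy stand-in classifier; all names and parameters (`occlusion_sensitivity`, `toy_model`, `patch`, `stride`, `fill`) are assumptions for illustration, not drawn from either reference.

```python
import numpy as np

def occlusion_sensitivity(model, image, target_class, patch=8, stride=8, fill=0.5):
    """Slide a gray square over `image` and record how much the target-class
    probability drops relative to the unmasked baseline (cf. Zeiler Sec. 4.2).
    Large drops mark the 'activation area'."""
    h, w = image.shape
    baseline = model(image)[target_class]            # first result (unmasked input)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    drops = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            masked = image.copy()
            y, x = i * stride, j * stride
            masked[y:y + patch, x:x + patch] = fill  # place the mask at this location
            drops[i, j] = baseline - model(masked)[target_class]
    return drops

# Toy stand-in classifier: the class-0 probability is the mean of one bright
# patch, so occluding that patch is the only thing that changes the output.
def toy_model(img):
    p = img[8:16, 8:16].mean()
    return np.array([p, 1.0 - p])

img = np.zeros((24, 24))
img[8:16, 8:16] = 1.0
heat = occlusion_sensitivity(toy_model, img, target_class=0, fill=0.0)
# A predefined threshold (cf. the claim 13 discussion) separates significant
# drops from negligible fluctuations:
activation_area = heat > 0.1
```

Comparing each masked result against the unmasked baseline, then thresholding the drop map, mirrors the claimed comparison of the "first result" to each "corresponding result."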
Cheng seeks to use neural networks to estimate complex states where physical sensors are lacking by using virtual sensors. Zeiler seeks to diagnose neural network behavior and understand "why they perform so well" or how they function by identifying which inputs activate the network. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art (PHOSITA), when developing neural network sensors like those in Cheng, to utilize the neural network analysis techniques taught by Zeiler to validate, debug, or improve the neural network's robustness. The suggestion/motivation for doing so would have been that Cheng utilizes a neural network as a "black box" to control engine parameters. However, Cheng notes that actual systems involve "design compromises" and require "accurate training" (Cheng: Background Art and Summary of Invention). A PHOSITA would be motivated to combine Zeiler’s "occlusion sensitivity" (masking) technique with Cheng’s system to provide diagnostic capabilities or robustness checks. By implementing Zeiler’s masking technique, the Cheng controller could verify which physical sensors (e.g., MAF vs. RPM) are driving the neural network’s output. This is critical in automotive safety to ensure the virtual sensor is relying on appropriate physical signals and to detect sensor faults, e.g., if a sensor fails, does the activation area shift inappropriately? Zeiler expressly suggests using these visualization/analysis tools in a "diagnostic role" to find problems or improve architectures (Zeiler: Abstract). It would be obvious to a PHOSITA to apply Zeiler’s general method for determining input importance (iterative masking and comparing results) to the specific neural network inputs in Cheng (sensor data vectors) to calculate an "activation area" (i.e., the subset of sensors actively determining the state).
This allows the controller to confirm that the "virtual sensor" is functioning correctly based on valid sensor data, enhancing the reliability of the system described in Cheng. Per claim 2, Cheng combined with Zeiler discloses claim 1, Cheng further disclosing the one or more computing devices comprise: a local computing device located in the environment (Cheng: fig. 1 and col. 2:48-59…the "controller 14" (local computing device) that is "in communication with" the engine and sensors, where the controller is located within the vehicle environment to perform real-time monitoring and control, being a local computing device); and a remote computing device not located in the environment (Cheng: fig. 3 and col. 5, lns 12-23…a manufacturing/development process where data is generated, a simulation model is calibrated, and the neural network is trained, this training occurs prior to the network being embedded in the controller, where the computer performing the simulation and training (remote device) is distinct from and not located in the vehicle environment during operation (local device), “The simulation program can then be used to interpolate or extrapolate a more complete set of data as represented by block 104. This comprehensive map characterizes performance of the vehicle component as a function of predetermined design and control parameters. This information is then used to program or train the neural network-based virtual sensor as represented by block 106. The sensor is then embedded in the controller in the form of data and instructions as represented by block 108”). Per claim 3, Cheng combined with Zeiler discloses claim 2, Cheng further disclosing the remote computing device is configured to generate the neural network based on second sensor data of the environment (Cheng: fig. 3 and col.
5:5-30… manufacturing/training the neural network in part by use of test data (second sensor data) that is generated during operation of the component (e.g., on a dynamometer), the test data is used to calibrate a simulation model, which generates maps to train neural network, and training process occurs prior to embedding the network and is performed by a separate system (remote computing device) distinct from the vehicle controller), and wherein the local computing device is configured to determine the state of the environment (Cheng: col. 2, lns 5-11…"The trained neural network is embedded in the controller", e.g., local computing device; Cheng: col. 7, lns 21-28…during operation in the vehicle, "the controller interrogates the virtual sensor" to determine the operating parameter (state of the environment)). Per claim 4, Cheng combined with Zeiler discloses claim 3, Cheng combined with Zeiler further disclosing the local computing device (Cheng: fig. 1…the "local computing device" (controller 14) which includes a "microprocessor 42" and memory to process values; Cheng: col. 1, lns 27-40…acknowledgment of the difficulty in diagnosing sensor malfunctions in actual systems, “The deployment of more and more physical sensors results in per-unit cost penalties in development and manufacturing. Replacement and repair costs also rise due to the increased number of sensors and difficulty in diagnosing sensor malfunctions. As such, actual systems typically involve design compromises to accommodate technological difficulties and reduce the cost and complexity of the physical system employed to monitor and control the engine”) is configured to calculate the activation area for the neural network (Zeiler: Section 1…calculating the activation area (occlusion sensitivity) to diagnose potential problems, “In this paper we introduce a visualization technique that reveals the input stimuli that excite individual feature maps at any layer in the model. 
It also allows us to observe the evolution of features during training and to diagnose potential problems with the model”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art (PHOSITA) to configure Cheng's local controller to run Zeigler’s diagnostic method. By programming the local microprocessor to iteratively mask sensor inputs (e.g., during a self-check or OBD routine) and compare the outputs, the local device calculates the activation area to verify the virtual sensor is relying on valid sensor data (effectiveness) rather than noise or a failing sensor. Per claim 5, Cheng combined with Zeiler discloses claim 3, Cheng combined with Zeiler further disclosing the remote computing device (Cheng: fig. 3 and col. 5, lns 12-23…a remote computer for "generating test data," "calibrating a simulation model," and "training neural net(s)" before embedding it in the vehicle controller) is configured to calculate the activation area for the neural network (Zeiler: Section 1…calculating the activation area (occlusion sensitivity) to diagnose potential problems, “In this paper we introduce a visualization technique that reveals the input stimuli that excite individual feature maps at any layer in the model. It also allows us to observe the evolution of features during training and to diagnose potential problems with the model”). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art (PHOSITA) to configure Cheng's remote training system to utilize Zeigler’s diagnostic method. By running the "activation area" calculation (occlusion sensitivity) on the remote device during the design/training phase, a developer can validate the model architecture and ensure the "virtual sensor" relies on the correct physical sensors (e.g., MAF vs. RPM) before the network is finalized and embedded in the vehicle. 
Per claim 8, Cheng combined with Zeiler discloses claim 1, Cheng combined with Zeiler further disclosing wherein the one or more computing devices are configured to calculate the activation area for a first prediction class of the neural network (Zeiler: Section 2 and fig. 6…using a neural network classifier that outputs a "probability vector... over the C different classes", demonstrating calculation of occlusion sensitivity (activation area) for a specific class (the "correct class" or first prediction class), such as "Pomeranian" or "Car Wheel"), and wherein the one or more computing devices are configured to calculate a second activation area for a second prediction class of the neural network (Zeiler: fig. 6…monitoring the "most probable class" as the mask is moved, fig. 6e shows that masking certain features causes the prediction to switch from a first class ("Pomeranian") to a second class ("Tennis Ball"), wherein to determine this switch, the system intrinsically calculates the activation/probability for the second class ("Tennis Ball") in response to the masked data. A PHOSITA would understand that the occlusion sensitivity method can be run for any class in the probability vector to generate that class's specific activation area). A PHOSITA implementing Cheng's "Virtual Sensor" would be motivated to ensure the neural network distinguishes between different engine states (classes) reliably. For example, distinguishing between "Normal Operation" (First Class) and "Engine Knock" or "Instability" (Second Class). To validate that the network is correctly distinguishing these states based on the appropriate physical sensors (and not spurious noise), the PHOSITA would look to diagnostic methods like Zeiler’s.
Per claim 9, Cheng combined with Zeiler discloses claim 8, Cheng combined with Zeiler further disclosing wherein the one or more computing devices are configured to compare the activation area to an expected activation area for the first prediction class (Zeiler: Section 4.2…verifying the model by comparing the calculated sensitivity (activation area) to the actual object location or feature visualization (expected activation area), "When the occluder covers the image region that appears in the visualization... we see a strong drop in activity... validating the other visualizations", using this to confirm the model is "truly identifying the location of the object" (expected area) rather than "surrounding context"); and wherein the one or more computing devices are configured to compare the second activation area to an expected activation area for the second prediction class (Zeiler: fig. 6...demonstrating validation for multiple distinct classes where fig. 6 shows comparison for a "Pomeranian" (First Class), a "Car Wheel" (Second Class), and an "Afghan Hound" (Third Class), in each case, the calculated occlusion map is compared against the expected location of the object (the dog's face, the wheel, etc.) to validate the predictor). A PHOSITA implementing the "virtual sensor" in Cheng would be motivated to verify that the neural network is functioning correctly before deploying it in a vehicle. Specifically, they would want to ensure the "virtual sensor" is calculating the engine state based on the correct physical signals (the expected activation area, e.g., the MAF sensor) rather than spurious correlations or noise. Zeiler provides the specific motivation to use occlusion sensitivity in a "diagnostic role" to validate that the model is "truly identifying the location of the object" (the expected features). 
Per claim 11, Cheng combined with Zeiler discloses claim 1, Cheng combined with Zeiler further disclosing the one or more computing devices are configured to compare the first result to the corresponding result, at least in part, by determining if a prediction class provided as part of the corresponding result is different from a prediction class provided as part of the first result (Zeiler: fig. 6…monitoring the "most probable class" (prediction class) as the mask is moved across the input, fig. 6(e) illustrates that when specific features are occluded (e.g., the dog's face), the prediction class flips from the original class ("Pomeranian") to a different class ("Tennis Ball") and using this determination (checking if the class is different) to analyze what features the network is using for classification). A PHOSITA implementing Cheng's "Virtual Sensor" would be motivated to verify that the neural network is robust and capable of distinguishing between different operating states (classes) reliably. For instance, determining if the engine is in a "Normal" state versus a "Fault" state. To validate the network, the PHOSITA would look to diagnostic methods like Zeiler’s to ensure the network is not fragile. Per claim 12, Cheng combined with Zeiler discloses claim 1, Cheng combined with Zeiler further disclosing wherein the one or more computing devices are configured to compare the first result to the second corresponding result, at least in part, by determining if a second prediction confidence level provided as part of the second corresponding result is different from a first prediction confidence level provided as part of the first result (Zeiler: Section 4.2…monitoring the "probability of the correct class" (prediction confidence level) as the mask is moved across the input such that "when the object is occluded...
the probability of the correct class drops significantly" showing comparison of the confidence level of the masked result (second/corresponding result) to the confidence level of the original result (first result) to determine if they are different, e.g., if the probability drops; fig. 6…"(d): map of correct class probability, as a function of the position of the gray square"). A PHOSITA implementing Cheng's "Virtual Sensor" would be motivated to ensure the neural network is robust and relies on valid physical correlations rather than noise. Cheng notes that actual systems involve "design compromises". Zeiler provides a specific diagnostic tool (Occlusion Sensitivity) to measure the importance of input features by monitoring the confidence level (probability) of the output. Per claim 13, Cheng combined with Zeiler discloses claim 12, Cheng combined with Zeiler further disclosing wherein the one or more computing devices are configured to determine if the second prediction confidence level is different from the first prediction confidence level, at least in part, by determining if a difference between the second prediction confidence level and the first prediction confidence level is greater than a predefined threshold value (Zeiler: Section 4.2…analyzing the occlusion results to find where the probability of the correct class "drops significantly" or shows a "strong drop", where determining if a value (the drop in probability) is "significant" or "strong" necessarily implies comparing the magnitude of that difference to a predefined threshold value (e.g., a noise floor or significance level) to distinguish relevant drops from negligible fluctuations). A PHOSITA implementing Zeiler’s "diagnostic role" would intrinsically use a threshold to define the "activation area" (the blue region in Fig. 6d) distinct from the background.
Furthermore, a PHOSITA developing the Cheng system would be motivated to ensure the "virtual sensor" output is reliable and not jittering due to insignificant noise. When implementing Zeiler’s occlusion sensitivity check to validate the sensor, the developer would need a criterion/threshold to decide when a sensor is "important" versus "irrelevant." Per claim 14, Cheng combined with Zeiler discloses claim 1, Cheng combined with Zeiler further disclosing wherein the one or more computing devices are configured to determine an effectiveness value for the neural network structure based at least in part on the calculated activation area (Zeiler: Section 1…using the calculated activation areas (visualizations/occlusion maps) in a diagnostic role to "diagnose potential problems with the model”; Zeiler: Section 4… analysis of activation areas to determine if the model is "truly identifying the location of the object" (effective) or just using "surrounding context" (ineffective), this diagnostic process intrinsically involves determining an effectiveness value or assessment of the current neural network structure), and wherein the one or more computing devices are configured, based at least in part on the determined effectiveness value, to generate a second neural network structure (Zeiler: Section 1…using the diagnostic results (determined effectiveness) to "find model architectures that outperform" existing ones; Zeiler: Section 4.1… specifically using the visualizations to identify "aliasing artifacts" in the first structure (Krizhevsky's architecture) and, based on this finding, generating a second neural network structure with different parameters, such as a reduced first-layer filter size (11x11 to 7x7) and stride (4 to 2), that outperforms the first neural network structure, “While visualization of a trained model gives insight into its operation, it can also assist with selecting good architectures in the first place. By visualizing the first and second layers of Krizhevsky et al.’s architecture (Fig.
5(a) & (c)), various problems are apparent. The first layer filters are a mix of extremely high and low frequency information, with little coverage of the mid frequencies. Additionally, the 2nd layer visualization shows aliasing artifacts caused by the large stride 4 used in the 1st layer convolutions. To remedy these problems, we (i) reduced the 1st layer filter size from 11x11 to 7x7 and (ii) made the stride of the convolution 2, rather than 4”). A PHOSITA implementing Cheng's "Virtual Sensor" would be motivated to optimize the neural network structure to ensure the highest possible accuracy for vehicle control. Cheng notes that actual systems involve "design compromises" (Cheng: Background Art). A PHOSITA would look to diagnostic techniques like Zeiler’s to evaluate the "effectiveness" of the initial neural network structure. Claims 15, 16, 17, 19 and 20 are substantially similar in scope and spirit to claims 1, 8, 9, 11 and 12, respectively. Therefore, the rejections of claims 1, 8, 9, 11 and 12 are applied accordingly.

Allowable Subject Matter

Claims 6, 7, 10 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is the statement of reasons for the indication of allowable subject matter: The prior art disclosed by the applicant and cited by the Examiner fails to teach or suggest, alone or in combination, all the limitations of the independent and intervening claims (claims 1, 9 and 17) further including the particular notable limitations provided below:

Claims 6-7: the environment is an automobile, and wherein the first sensor data comprises images of an interior of the automobile.
Claims 10 and 18: the one or more computing devices are configured, based at least in part on a result of comparing the activation area to the expected activation area for the first prediction class, to instruct the one or more sensors to generate additional sensor data for the first prediction class.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Patents and/or related publications are cited in the Notice of References Cited (Form PTO-892) attached to this action to further show the state of the art with respect to a distributed neural network system for vehicle monitoring and control. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN CHEN whose telephone number is (571)272-4143. The examiner can normally be reached M-F 10-7. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALAN CHEN/
Primary Examiner, Art Unit 2125

Prosecution Timeline

Oct 03, 2022
Application Filed
Jul 30, 2025
Final Rejection — §103, §DP
Jan 05, 2026
Request for Continued Examination
Jan 22, 2026
Response after Non-Final Action
Feb 18, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596942
BLACK-BOX EXPLAINER FOR TIME SERIES FORECASTING
2y 5m to grant (Granted Apr 07, 2026)
Patent 12596084
MACHINE LEARNING FOR HIGH-ENERGY INTERACTIONS ANALYSIS
2y 5m to grant (Granted Apr 07, 2026)
Patent 12596929
INTEGRATED CIRCUIT WITH DYNAMIC FUSING OF NEURAL NETWORK BRANCH STRUCTURES BY TOPOLOGICAL SEQUENCING
2y 5m to grant (Granted Apr 07, 2026)
Patent 12591777
PARSIMONIOUS INFERENCE ON CONVOLUTIONAL NEURAL NETWORKS
2y 5m to grant (Granted Mar 31, 2026)
Patent 12585930
NPU FOR GENERATING FEATURE MAP BASED ON COEFFICIENTS AND METHOD THEREOF
2y 5m to grant (Granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 91%
With Interview: 97% (+6.3% lift)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 1126 resolved cases by this examiner. Grant probability derived from career allow rate.
