Prosecution Insights
Last updated: April 19, 2026
Application No. 18/178,988

NEURAL NETWORK APPLICATIONS IN RESOURCE CONSTRAINED ENVIRONMENTS

Non-Final OA (§103, §112)
Filed
Mar 06, 2023
Examiner
CHEN, ALAN S
Art Unit
2125
Tech Center
2100 — Computer Architecture & Software
Assignee
Alpine Electronics of Silicon Valley, Inc.
OA Round
1 (Non-Final)
91%
Grant Probability
Favorable
1-2
OA Rounds
2y 11m
To Grant
97%
With Interview

Examiner Intelligence

Grants 91% — above average
91%
Career Allow Rate
1025 granted / 1126 resolved
+36.0% vs TC avg
Moderate +6% lift
+6.3%
Interview Lift
Based on resolved cases with interview
Typical timeline
2y 11m
Avg Prosecution
22 currently pending
Career history
1148
Total Applications
across all art units
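
For readers who want to reproduce the headline numbers, here is a minimal sketch in Python of how the career figures above fit together. The Tech Center average is not published on this page; it is implied by the "+36.0% vs TC avg" delta, so treat that value as an estimate.

```python
# Minimal sketch: deriving the headline examiner stats from the raw counts
# shown above. The TC average is an assumption implied by the published delta.

granted, resolved = 1025, 1126        # career totals shown above

allow_rate = granted / resolved       # 0.9103... -> "91% Career Allow Rate"
implied_tc_avg = allow_rate - 0.360   # from "+36.0% vs TC avg"

print(f"Career allow rate: {allow_rate:.1%}")       # 91.0%
print(f"Implied TC average: {implied_tc_avg:.1%}")  # ~55.0%
```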

Statute-Specific Performance

§101
12.7%
-27.3% vs TC avg
§103
20.8%
-19.2% vs TC avg
§102
37.5%
-2.5% vs TC avg
§112
19.9%
-20.1% vs TC avg
Tech Center averages are estimates • Based on career data from 1126 resolved cases
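
The table above can also be expressed as data. A small sketch, assuming each published delta is the examiner's rate minus the Tech Center average estimate, recovers that average as rate minus delta:

```python
# Sketch: the statute-specific figures above as data. Assumes each delta is
# (examiner rate - TC average estimate), so the TC average is rate - delta.

statute_rates = {
    "§101": (0.127, -0.273),
    "§103": (0.208, -0.192),
    "§102": (0.375, -0.025),
    "§112": (0.199, -0.201),
}

for statute, (rate, delta) in statute_rates.items():
    tc_avg_estimate = rate - delta
    print(f"{statute}: {rate:.1%} vs TC avg est. {tc_avg_estimate:.1%} ({delta:+.1%})")
```

Notably, all four rows recover the same ~40.0% TC average estimate, a useful consistency check on the published deltas.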

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: 'Driver State Monitoring and Vehicle Control System Using Neural Networks'.

The disclosure is objected to because of the following informalities:

In ¶5…"the system further incudes" should be 'the system further includes'
In ¶86…"if output node 675 has a combined input values greater than" should be 'if output node 675 has a combined input value greater than'
In ¶106…"for the an image of the interior" should be 'for the image of the interior'
In ¶116…"sterevision" should be 'stereovision'
In ¶117…"a train an update neural network structure" should be 'and train an updated neural network structure'
In ¶123…"gryroscope" should be 'gyroscope'
In ¶124…"sensor 760" should be 'sensor 769'
In ¶134…"enhancing brightness an image" should be 'enhancing brightness of an image'
In ¶134…"combining images from several sensor" should be 'combining images from several sensors'
In ¶156…"(e.g., CDMA2000, GSM, 4G LTE) transceiver." is missing a closing parenthesis.
In ¶176…"the processor 1208 may be determine based on" should be 'the processor 1208 may be determined based on'
In ¶178…"a driving mode transition rule may specific that" should be 'a driving mode transition rule may specify that'
In ¶213…"accidently" should be 'accidentally'
In ¶241…"the vehicle state is "a velocity of zero." is missing a closing quote.
In ¶250…"The sensor 1716 may transmit a an image of the interior" should be 'The sensor 1716 may transmit an image of the interior'

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Per claims 1 and 15, "a state of a person" is broad and abstract. It is unclear whether the state of a person refers to a physiological state, i.e., awake, asleep, drowsy, etc., a psychological state, i.e., angry, sad, calm, etc., a physical posture, i.e., sitting, standing, etc., or an activity, i.e., driving, texting, etc. All of these states can possibly be determined by portions of the body being monitored. To expedite prosecution, Examiner interprets this limitation to be associated with the physical activity or attentiveness level of a person in order to provide objective boundaries.

Claims 2-14 and 16-20 are rejected as being dependent upon a rejected base claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over "Multi-sensor System for Driver's Hand-Gesture Recognition" to Molchanov et al. (hereinafter Molchanov) in view of US Pat. Pub. No. 2014/0309839 to Ricci et al. (hereinafter Ricci).

Per claim 1, Molchanov discloses A system (Abstract…multi-sensor system for dynamic gesture recognition, "We propose a novel multi-sensor system for accurate and power-efficient dynamic car-driver hand-gesture recognition…"; Section I and fig. 1…"We present a novel multi-sensor system…") comprising:

one or more sensors (Section III.A and fig. 1…"Our system uses a color camera, a time-of-flight (TOF) depth camera, and a short-range radar") located in an automobile (Section III.B and fig. 1…"Our gesture interface is located in the central console facing the interior of the car within arm's reach (50cm) of the driver (Fig. 1)"), wherein the one or more sensors are configured to generate sensor data related to an interior of the automobile (Section III.B and fig. 1…the sensors gather data from the interior of the car where they are pointed, "Our gesture interface is located in the central console facing the interior of the car within arm's reach (50cm) of the driver (Fig. 1)"), wherein the sensor data comprises images of the interior of the automobile (Section III.A and fig. 1…images of the interior of the automobile are captured, "The color camera acquires RGB images (640×480) and the depth camera captures range (z) images (320×240) of the objects that are closest to it, both at 30 fps"), wherein the images of the interior of the automobile include images of portions of the body of a person (Section III.E…images of the driver and hand region are captured, "We first segment the hand region in the depth image by assuming that it is the closest connected component to the depth camera and generate a mask for the hand region"; Section III.B…the sensors' field of view includes the driver's reach, which can include the body, "Our gesture interface is located in the central console facing the interior of the car within arm's reach (50cm) of the driver (Fig. 1)… Gestures can be performed anywhere roughly within the center of the field of view (FOV) of the interface"; Section II…"Convolutional DNNs have also been employed previously to detect and recognize 20 gestures from the Italian sign language using RGB-D images of hand regions along with upper-body skeletal features [20]"), …, wherein the portions of the body of the person include the arms of the person (Section III.E…the hand region can include the arm, "We first segment the hand region in the depth image by assuming that it is the closest connected component to the depth camera and generate a mask for the hand region"; Section III.B…the sensors' field of view includes the driver's reach, which includes the arm, "Our gesture interface is located in the central console facing the interior of the car within arm's reach (50cm) of the driver (Fig. 1)… Gestures can be performed anywhere roughly within the center of the field of view (FOV) of the interface"; Section II…the upper body includes the arms, "Convolutional DNNs have also been employed previously to detect and recognize 20 gestures from the Italian sign language using RGB-D images of hand regions along with upper-body skeletal features [20]"), wherein the portions of the body of the person include the hands of the person (Section III.E…the hand region is captured, "We first segment the hand region in the depth image by assuming that it is the closest connected component to the depth camera and generate a mask for the hand region"), …;

a computing device located in the automobile (Section III.A…a microcontroller and computer chip control the sensors, "In our radar system, we employed a 24GHz front-end Infineon chip and wave guide antennas… We used a Tiva C micro controller (Texas Instruments, Dallas, TX) for controlling the radar chip, sampling the signal, generating the control signal, and for transferring data to the host"; Section III.F…a computer with a GPU is the computing device used to train the classifier, "We learned the parameters of the DNN by means of a labelled training data set using the Theano package [31]. Training was performed on a CUDA capable Quadro 6000 NVIDIA GPU"), wherein the computing device is configured to receive neural network configuration parameters (Section III.F…a convolutional neural network is used to classify gestures, requiring training where weights are received and tuned, "We train a convolutional deep neural network classifier for recognizing different types of dynamic gestures…There are nearly 7.8 million tunable weights in the network that need to be learnt"; Section III.F…"Training was performed on a CUDA capable Quadro 6000 NVIDIA GPU"), wherein the computing device is configured to receive the sensor data (Section III.E and fig. 6…the neural network uses sensor data, e.g., images, as input, "We represent a dynamic gesture by a batch of temporal frames, which is input to the classifier for gesture recognition (Fig. 6)"), wherein the computing device is configured to determine a state of the person based on the sensor data and the neural network configuration parameters (Section I…gesture recognition, e.g., a state of a person, "In summary, our contributions are: (1) a novel multi-sensor gesture recognition system that effectively combines imaging and radar sensors; (2) use of the radar sensor for dynamic gesture segmentation, recognition, and reduced power consumption…"; Section IV.B…the neural network determines the gesture type, e.g., the state of the person, as a classification based on the input sensor/image data and the learned parameters, "We computed the average Precision, Recall, and F score, and the accuracy of the gesture recognition system... We estimated these values for each of the 11 gesture classes and then averaged them together to produce single values"); and

an automobile controller located in the automobile (Section I…gesture recognition can be used to control automobile functionality, e.g., send commands to an automobile controller located inside the automobile, "Visual-manual interfaces, such as haptic controls and touch screens in cars, cause significant distraction. Hand-gesture-based user interfaces (UIs) in cars can lower visual and cognitive distraction, and can improve safety and comfort… gesture interfaces are desirable to consumers…They can be easily customized to individual users' preferences for gesture types and can be expanded in the future to include functionality for driver monitoring"; fig. 1…gesture recognition can be used as commands to the automobile, such as for controlling the infotainment system, i.e., play, next, search, etc.), …, wherein the automobile controller is configured to receive a result of the determination of the state of the person from the computing device (fig. 1…the classification of the user gesture by the neural network, e.g., the determined state of the person, is used to execute commands such as play, next, search, etc.), …

Molchanov does not expressly disclose, but Ricci does teach:

…wherein the portions of the body of the person include the head of the person (Ricci: ¶484…sensors monitor the user's head, "If the user's head deviates from that interior space for some amount of time, the vehicle control system 204 can determine that something is wrong with the driver and change the function or operation of the vehicle 104 to assist the driver"; ¶370…sensors determine dimensions of a user's face, e.g., head, "The image sensors 622A-B may be used alone or in combination to identify objects, users 216, and/or other features, inside the vehicle 104... the image sensors 622A-B may be used to determine dimensions between various features of a user's face (e.g., the depth/distance from a user's nose to a user's cheeks, a linear distance between the center of a user's eyes, and more). These dimensions may be used to verify, record, and even modify characteristics that serve to identify a user 216");

…wherein the portions of the body of the person include at least a portion of the torso of the person (Ricci: ¶366…determining if a person is leaning forward involves detection of the torso of the person, "Safety sensors can measure whether the person is acting safely. Optical sensors can determine a person's position and focus. If the person stops looking at the road ahead, the optical sensor can detect the lack of focus. Sensors in the seats may detect if a person is leaning forward");

…wherein the automobile controller (Ricci: ¶391…sensor output can be used by the automobile controller to control systems in the automobile, such as electrical or mechanical components, "As can be appreciated, the restraint devices and/or systems may be associated with one or more sensors that are configured to detect a state of the device/system. The state may include extension, engagement, retraction, disengagement, deployment, and/or other electrical or mechanical conditions associated with the device/system"; fig. 2:204…the vehicle control system can be the automobile controller; fig. 8C:8104…automobile controller) is configured to control operation of the automobile in a self-driving mode (Ricci: ¶361…sensors are used by the automobile controller to control driverless systems based on what is sensed, "A set of sensors or vehicle components 600 associated with the vehicle 104 may be as shown in FIG. 6A. The vehicle 104 can include, among many other components common to vehicles… driverless systems (e.g., cruise control systems, automatic steering systems, automatic braking systems, etc.)"; Abstract…"…control the operation of the vehicles through a section of roadway. The automated control includes the communication of directions and other messages that ensure the proper function of the vehicle while under the guidance of the traffic control system");

…wherein the automobile controller is configured to control operation of the automobile in the self-driving mode based at least in part on the result of the determination of the state of the person (Ricci: ¶373…if sensors detect a lack-of-focus state of a person, the control system can safely take control of the vehicle and bring the vehicle to a stop, "if the seat sensors 677 detect that a user 216 is fidgeting, or moving, in a seemingly uncontrollable manner, the system may determine that the user 216 has suffered a nervous and/or muscular system issue (e.g., seizure, etc.). The vehicle control system 204 may then cause the vehicle 104 to slow down and in addition or alternatively the automobile controller 8104 (described below) can safely take control of the vehicle 104 and bring the vehicle 104 to a stop in a safe location (e.g., out of traffic, off a freeway, etc.)"; ¶366…"Safety sensors can measure whether the person is acting safely. Optical sensors can determine a person's position and focus. If the person stops looking at the road ahead, the optical sensor can detect the lack of focus. Sensors in the seats may detect if a person is leaning forward or may be injured by a seat belt in a collision. Other sensors can detect that the driver has at least one hand on a steering wheel").

Molchanov and Ricci are analogous art because they are from the same field of endeavor in automotive electronics and vehicle occupant monitoring systems. Both references address the problem of monitoring a driver/occupant within a vehicle interior using sensors (cameras, etc.) to determine a state (gesture, focus, safety) and performing a computerized action based on that determination. Molchanov teaches that the interior of a car is a "challenging environment" due to variable lighting and that existing sensors do not work reliably (Molchanov: Section I). Molchanov solves this by feeding a multi-sensor system into a deep neural network to robustly classify states/gestures (Molchanov: fig. 1). Ricci teaches a system that controls "automated vehicles" and monitors the driver's state (e.g., focus, hands on wheel, body position) to ensure safety, engaging a safe shutdown or taking control if the driver is unsafe.

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the neural network-based sensing system of Molchanov with the vehicle control and safety systems of Ricci. It would have been obvious to employ Molchanov's robust, DNN-based classification method to detect the driver states required by Ricci (e.g., head/gaze direction, hand position, torso position). Doing so would improve the reliability of Ricci's safety features, such as the safe shutdown mode, by ensuring the system does not falsely trigger due to poor lighting or occlusion, which Molchanov explicitly addresses. The combination involves the simple substitution of Molchanov's advanced sensing/classification back-end into Ricci's vehicle control front-end to yield predictable, improved results in driver monitoring.

Per claim 2, Molchanov combined with Ricci discloses claim 1. Ricci further discloses that the portions of the body of the person include at least a portion of the legs of the person (Ricci: ¶480…detecting/monitoring the legs/feet and lower body by sensing the position of the driver's legs relative to the pedals, "the settings 1224 may be the position of a seat, the position of a steering wheel, the position of accelerator and/or brake pedals"; ¶422…detecting a person walking or moving about the cabin requires imaging/sensing the legs, "The wearable devices 802, 806, 810 can include…pedometers, movement sensors…"; fig. 6B…the sensor field of view covers the full seating area where the legs are located). Molchanov and Ricci are analogous art because they are from the same field of endeavor in automotive electronics and vehicle occupant monitoring systems. Both references address the problem of monitoring a driver/occupant within a vehicle interior using sensors (cameras, etc.) to determine a state (gesture, focus, safety) and performing a computerized action based on that determination. A person of ordinary skill in the art would be motivated to combine the robust neural network sensing of Molchanov with the specific body-part monitoring (including the legs) of Ricci because extending Molchanov's robust DNN classification to include the legs, as taught by Ricci's pedal, movement, and position monitoring, would allow the system to accurately determine if a driver is in a safe driving posture or if a passenger is out of position. Adding the monitoring of the legs from Ricci to the Molchanov system involves the predictable application of known sensing technology (Ricci's sensors) to solve the known problem of ensuring a driver is safely positioned to operate the vehicle.

Per claim 3, Molchanov combined with Ricci discloses claim 1. Molchanov further discloses that the state of the person is a current activity of the person (fig. 9…the state is a dynamic gesture, i.e., shaking, swiping, rotation, these being current activities of the person, "Gesture types We used 10 different dynamic gestures for training our system: moving left/right/up/down (classes 1-4), swiping left/right (classes 5-6), shaking (class 7), CW/CCW rotation (classes 8-9), and calling (class 10)").

Per claim 4, Molchanov combined with Ricci discloses claim 3, further disclosing that the computing device is configured to determine the state of the person based at least in part on the location of the arms of the person as captured in the sensor data (Molchanov: fig. 5…recognizing dynamic hand gestures, extracting features (sparse velocity values) extrapolated across the hand region, necessitating the location tracking of the hands/arms) and based at least in part on the orientation of the head of the person as captured in the sensor data (Ricci: ¶484…the system determines safety parameters by tracking whether the user's head position deviates from a required three-dimensional space, which requires monitoring the spatial location and orientation of the head, "If the user's head deviates from that interior space for some amount of time, the vehicle control system 204 can determine that something is wrong with the driver and change the function or operation of the vehicle 104 to assist the driver. This may happen, for example, when a user falls asleep at the wheel. If the user's head droops and no longer occupies a certain three dimensional space, the vehicle control system 204 can determine that the driver has fallen asleep and may take control of the operation of the vehicle 204 and the automobile controller 8104 may steer the vehicle 204 to the side of the road"). The rationale to combine Molchanov with Ricci for these limitations is the same as previously provided.

Per claim 5, Molchanov combined with Ricci discloses claim 4. Molchanov further discloses that the computing device is configured to determine the state of the person in order to determine whether the person is currently using a handheld mobile device (Molchanov: fig. 9…calling gesture, "Gesture types We used 10 different dynamic gestures for training our system: …calling gesture (class 10)").

Per claim 6, Molchanov combined with Ricci discloses claim 5. Ricci further discloses that the computing device is configured to determine the state of the person in order to determine whether the person is currently in a safe driving state (Ricci: ¶484…the vehicle control system monitors the user's health/safety data (including head/arm location and deviation from norms) to determine if there is a problem and may react to ensure that the driver is in a safe driving state and able to operate the vehicle, "…the vehicle control system 204 can determine that the driver has fallen asleep and may take control of the operation of the vehicle 204 and the automobile controller 8104 may steer the vehicle 204 to the side of the road. In other examples, if the user's reaction time is too slow or some other safety parameter is not nominal, the vehicle control system 204 may determine that the user is inebriated or having some other medical problem. The vehicle control system 204 may then assume control of the vehicle to ensure that the driver is safe"). The rationale to combine Molchanov with Ricci for these limitations is the same as previously provided.

Per claim 12, Molchanov combined with Ricci discloses claim 1. Ricci further discloses that the computing device is configured to determine whether to update the neural network parameters (Ricci: ¶527-538…storing and updating settings associated with a user, such as those related to gestures, where the computing device determines if settings/parameters are to be updated in memory based on conditions such as elapsed time or user input, "the vehicle control system 204 may receive health and/or safety data from the vehicle sensors 242, in step 1828. The vehicle control system 204 can determine if the health or safety data is to be stored, in step 1832. The determination is made as to whether or not there is sufficient health data or safety parameters, in portion 1228 and 1236, to provide a reasonable baseline data pattern for the user 1240… The vehicle control system 204 may then wait a period of time, in step 1836. The period of time may be any amount of time from seconds to minutes to days. Thereinafter, the vehicle control system 204 can receive new data from vehicle sensors 242, in step 1828. Thus, the vehicle control system 204 can receive data periodically and update or continue to refine the health data and safety parameters in data structure 1204"; ¶531…"…the vehicle control system 204 may determine if gestures are to be stored and associated with the user, in step 1628. The vehicle control system 204 may receive user input on a touch sensitive display or some other type of gesture capture region which acknowledges that the user wishes to store one or more gestures. Thus, the user may create their own gestures such as those described in conjunction with FIGS. 11A-11K. These gestures may then be characterized and stored in data structure 1204").

Molchanov and Ricci are analogous art because they are from the same field of endeavor in automotive electronics and vehicle occupant monitoring systems. Both references address the problem of monitoring a driver/occupant within a vehicle interior using sensors (cameras, etc.) to determine a state (gesture, focus, safety) and performing a computerized action based on that determination. Molchanov teaches the use of complex machine learning models (DNNs) that require training and optimization (e.g., weights/parameters). Ricci teaches a generic method for the on-board computing device to determine whether configuration settings/data should be stored/updated based on certain conditions (e.g., the passing of a period of time, user input). It would be obvious to apply Ricci's general update determination framework to Molchanov's specific machine learning parameters (neural network weights/configuration parameters) to allow the system to continuously adapt and improve the accuracy of its state determination function (classification accuracy improved by optimizing the network), especially since Ricci teaches storing settings in cloud storage for portability and robustness (¶518-519…"The vehicle control system 204 may then store the settings for the person, in step 1328. The user interaction subsystem 332 can make a new entry for the user 1208 in data structure 1204…The settings may also be stored in cloud storage, in step 1332. Thus, the vehicle control system 204 can send the new settings to the server 228 to be stored in storage 232. In this way, these new settings may be ported to other vehicles for the user. Further, the settings in storage system 232 may be retrieved, if local storage does not include the settings in storage system 208").

Per claim 13, Molchanov combined with Ricci discloses claim 12. Molchanov further discloses a remote computing device not located in the automobile (Molchanov: Section III.F…training is done using a powerful GPU, e.g., a Quadro 6000 NVIDIA GPU, which is intrinsically offline/remote relative to the more power-limited automobile, "We learned the parameters of the DNN by means of a labelled training data set using the Theano package [31]. Training was performed on a CUDA capable Quadro 6000 NVIDIA GPU"), wherein the remote computing device is configured to generate the neural network parameters (Molchanov: fig. 1…the convolutional DNN uses data fused from three sensors (optical, depth, and radar), "We propose a multi-sensor gesture recognition system that uses optical, depth, and radar sensors. Data from the multiple sensors are input into a deep neural network classifier for recognizing dynamic gestures"; Section IV.B and fig. 10…the kernels (NN parameters) learned by the DNN show that the radar sensor contributed towards the final decision made by the network, confirming the parameters were generated based on the second sensor data, "The kernels learned by the first convolutional layer of the DRO network are illustrated in Fig. 10. Assuming that x and y are spatial dimensions, and t is the temporal dimension, projections of the learnt convolutional kernels on to the yt, xt, and xy planes are depicted. Observe that all three sensors contributed towards the final decision made by the network") based on second sensor data generated by the one or more sensors (Molchanov: Section III.A and fig. 2…the multi-sensor system includes a short-range radar as an additional sensor located inside the car, generating range and velocity data, "Our system uses a color camera, a time-of-flight (TOF) depth camera, and a short-range radar... Off-the-shelf short-range radar systems in the permitted frequency bands that are appropriate for use inside a car are not widely available. Therefore, we built a prototype radar system, with an operational range of <1m (Fig. 2)…"), wherein the remote computing device is configured to, in response to the computing device determining to update the neural network parameters, generate second neural network parameters based on third sensor data generated by the one or more sensors (Molchanov: Section III.F…Molchanov teaches that the generalization capability of the DNN is improved by augmenting the training dataset with transformed versions of the training samples, e.g., third sensor data, and expanding the study to a larger data set of gestures of more subjects to improve the generalization of the DNN, indicating that additional training (generation of new parameters, e.g., second neural network parameters) should occur when the system needs improvement (i.e., when parameters need updating, determined implicitly or explicitly by the on-board computer), "We found that a number of procedures helped to increase the accuracy of the system. Weight decay and dropout prevented the network from over-fitting to the training data and improved the classification accuracy by 2.3% on average. Augmenting the training dataset with transformed versions of the training samples also helped to improve the generalization capability of the DNN. We applied the same transformation to all the three sensor channels of each gesture…").

Per claim 14, Molchanov combined with Ricci discloses claim 13. Molchanov further discloses that the computing device is configured to determine the state of the person in the automobile based on the second neural network configuration parameters and based on fourth sensor data generated by the one or more sensors (Molchanov: Section III.F…Molchanov teaches that the generalization capability of the DNN is improved by augmenting the training dataset with transformed versions of each of the three sensors, thus effectively generating three additional sets of sensor data, e.g., including fourth sensor data, and expanding the study to a larger data set of gestures of more subjects to improve the generalization of the DNN, indicating that additional training (generation of new parameters, e.g., second neural network parameters) is performed to improve the classification accuracy of gestures, "We found that a number of procedures helped to increase the accuracy of the system…Augmenting the training dataset with transformed versions of the training samples also helped to improve the generalization capability of the DNN. We applied the same transformation to all the three sensor channels of each gesture…").

Claims 15, 16, 17, 18, 19 and 20 are substantially similar in scope and spirit to claims 1, 4, 12, 13, 14 and 5, respectively. Therefore, the rejections of claims 1, 4, 12, 13, 14 and 5 are applied accordingly.

Allowable Subject Matter

Claims 7-11 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112 set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. The following is the statement of reasons for the indication of allowable subject matter: the prior art disclosed by the applicant and cited by the Examiner fails to teach or suggest, alone or in combination, all the limitations of independent claim 1, further including the particular notable limitations of: one or more second sensors configured to generate second sensor data related to an interior of a second automobile, wherein the second sensor data comprises images of the interior of the second automobile, wherein the neural network configuration parameters are generated based on the second sensor data.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Patents and/or related publications are cited in the Notice of References Cited (Form PTO-892) attached to this action to further show the state of the art with respect to controlling self-driving modes based on neural network analysis of driver body positions.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN CHEN, whose telephone number is (571) 272-4143. The examiner can normally be reached M-F 10-7. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALAN CHEN/
Primary Examiner, Art Unit 2125
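
For orientation only, here is a toy sketch of the claim-1 data flow that the §103 rejection maps onto Molchanov and Ricci: in-car sensor data feeds a neural network that determines an occupant state, and an automobile controller acts on the result. Every name, the state labels, and the tiny two-layer network below are hypothetical illustrations, not the applicant's or either reference's actual implementation.

```python
# Illustrative sketch only: the claim-1 pipeline the rejection describes
# (sensors -> neural network state classifier -> automobile controller).
# The network, labels, and functions are hypothetical stand-ins.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)  # "neural network
W2, b2 = rng.standard_normal((3, 16)), np.zeros(3)   #  configuration parameters"
STATES = ["attentive", "distracted", "using_handheld_device"]  # hypothetical

def determine_state(sensor_features: np.ndarray) -> str:
    """Computing device: classify the occupant state from fused sensor data."""
    h = np.maximum(W1 @ sensor_features + b1, 0.0)   # hidden layer, ReLU
    logits = W2 @ h + b2
    return STATES[int(np.argmax(logits))]

def automobile_controller(state: str) -> str:
    """Controller: adjust self-driving behavior based on the determined state."""
    return "maintain" if state == "attentive" else "engage_assist"

features = rng.standard_normal(8)  # stand-in for fused camera/depth/radar features
state = determine_state(features)
print(state, "->", automobile_controller(state))
```

The point of the sketch is only the separation of roles the claim recites: configuration parameters arrive at the computing device, sensor data flows in, and the controller consumes the determined state.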

Prosecution Timeline

Mar 06, 2023
Application Filed
Dec 12, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596942
BLACK-BOX EXPLAINER FOR TIME SERIES FORECASTING
2y 5m to grant Granted Apr 07, 2026
Patent 12596084
MACHINE LEARNING FOR HIGH-ENERGY INTERACTIONS ANALYSIS
2y 5m to grant Granted Apr 07, 2026
Patent 12596929
INTEGRATED CIRCUIT WITH DYNAMIC FUSING OF NEURAL NETWORK BRANCH STRUCTURES BY TOPOLOGICAL SEQUENCING
2y 5m to grant Granted Apr 07, 2026
Patent 12591777
PARSIMONIOUS INFERENCE ON CONVOLUTIONAL NEURAL NETWORKS
2y 5m to grant Granted Mar 31, 2026
Patent 12585930
NPU FOR GENERATING FEATURE MAP BASED ON COEFFICIENTS AND METHOD THEREOF
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
91%
Grant Probability
97%
With Interview (+6.3%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 1126 resolved cases by this examiner. Grant probability derived from career allow rate.
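
As the note above says, the projections are derived from the career allow rate. A small sketch of that derivation, assuming the "+6.3%" interview lift adds directly to the base probability and is capped at 100%:

```python
# Sketch: composing the projection figures above. Assumes the interview
# lift is simply additive to the base grant probability, capped at 1.0.

base_grant_prob = 1025 / 1126   # career allow rate, ~91%
interview_lift = 0.063          # "+6.3%" lift shown above

with_interview = min(base_grant_prob + interview_lift, 1.0)
print(f"Grant probability: {base_grant_prob:.0%}")  # 91%
print(f"With interview:    {with_interview:.0%}")   # 97%
```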

Free tier: 3 strategy analyses per month