Prosecution Insights
Last updated: April 19, 2026
Application No. 18/122,286

SYSTEMS AND METHODS FOR IN-CABIN MONITORING WITH LIVELINESS DETECTION

Non-Final OA §103
Filed: Mar 16, 2023
Examiner: VELASQUEZ VANEGAS, RAFAEL
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Ford Global Technologies LLC
OA Round: 3 (Non-Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (2 granted / 4 resolved; -2.0% vs TC avg)
Interview Lift: +100.0% (strong), based on resolved cases with interview
Avg Prosecution: 3y 0m typical timeline; 37 applications currently pending
Total Applications: 41 across all art units (career history)
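
For orientation, the short sketch below shows how headline figures such as the career allow rate and the interview lift above are conventionally derived from resolved-case counts. The report does not publish its formulas, so the lift definition and the example rates used here are assumptions for illustration only.

granted, resolved = 2, 4                     # "2 granted / 4 resolved"
career_allow_rate = granted / resolved       # 0.50 -> 50% Career Allow Rate

def interview_lift(rate_with: float, rate_without: float) -> float:
    # Assumed definition: relative change in allowance rate for resolved
    # cases with an examiner interview vs. those without one.
    return (rate_with - rate_without) / rate_without

# Example rates only: 100% with interview vs. 50% without reproduces the
# +100.0% interview lift figure shown above.
print(f"{career_allow_rate:.0%}", f"{interview_lift(1.0, 0.5):+.1%}")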

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Baseline: Tech Center average estimate • Based on career data from 4 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Status of Claims Claims 1-26 are pending Claims 1,11, 12, 15, 16, and 20 are amended. Response to Arguments Claims 1 and 12 Applicant’s arguments with respect to claims 1 and 12 have been considered but are moot in view of the new ground(s) of rejection as necessitated by applicant's amendments. Claim 16 Applicant’s arguments with respect to claim 16 has been considered but are moot in view of the new ground(s) of rejection as necessitated by applicant's amendments. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Claims 1, 3-10, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over GLAZMAN (US20210179117A1) in view of DIMITROV (US 20230071443 A1) in further view of BOTONJIC (US 20200294266 A1). Regarding claim 1: GLAZMAN discloses: A method for monitoring an object in a compartment of a vehicle, the method comprising: (see at least GLAZMAN, ¶ 0011, "The improvement may be obtained at least based on the point cloud sensor described herein that captures depth data of the cabin, optionally the entire cabin, including multiple occupants located at any possible seating combination available in the cabin”; ¶ 0123, The pattern mapped to the imaging sensor is processed by processor(s) 106 to compute pattern(s) relating to distortion/shift (due to depth) in order to provide depth, contour and/or movement information relating to objects (e.g. vehicle occupants) positioned in the vehicle cabin.) representing the compartment of the vehicle, each of the plurality of point clouds including three-dimensional positional information of the compartment; (see at least GLAZMAN, ¶ 0011) a shape of the object based on the compared point cloud; (see at least GLAZMAN, ¶ 0269, "At 602, the point cloud is analyzed to estimate the height and/or weight of the driver and/or passengers. It is noted that each car seat may be classified as being occupied by a passenger, or empty, or including one or more inanimate objects. The analysis of the seat may be performed, for example, by an analysis of a correlation between the current point cloud and a point cloud of an empty passenger compartment. Variations in the current point cloud relative to the point cloud of the empty passenger compartment at locations corresponding to seats is indicative of the presence of a passenger or inanimate object. Inanimate objects may be distinguished for human and/or pet passengers based on detected motion, as described herein.) classifying the object as an occupant based on the shape (see at least GLAZMAN, ¶ 0269, "At 602, the point cloud is analyzed to estimate the height and/or weight of the driver and/or passengers. It is noted that each car seat may be classified as being occupied by a passenger, or empty, or including one or more inanimate objects. 
The analysis of the seat may be performed, for example, by an analysis of a correlation between the current point cloud and a point cloud of an empty passenger compartment. Variations in the current point cloud relative to the point cloud of the empty passenger compartment at locations corresponding to seats is indicative of the presence of a passenger or inanimate object. Inanimate objects may be distinguished for human and/or pet passengers based on detected motion, as described herein.) identifying a body segment of the occupant; (see at least GLAZMAN, ¶ 0051, "In a further implementation form of the second aspect, the classifier includes code for at least one of: (i) analyzing the depth data to estimate volume and dimensions including height of each at least one occupant, computing body structure of each at least one occupant according to the computed estimate of volume and dimensions, and computing mass of each at least one occupant according to the computed body structure, (ii) computing age and/or gender according to the computed body structure, and (iii) identifying relative locations of at least one body part according to the depth data and computing a body post classification category according to the identified relative locations of at least one body part.") comparing the body segment to target keypoints corresponding to a target attribute for the body segment (see at least GLAZMAN, ¶ 0376, "Relative locations of one or more body parts (e.g., hand, leg, torso, head) may be computed according to the depth data. A body posture classification category may be computed according to the identified relative locations of the body part(s).") determining a condition of the occupant based on the comparison of the body segment to the target keypoints; and (see at least GLAZMAN, ¶ 0172, "At 304, the point cloud is analyzed to identify the identity of the driver and/or passenger(s), optionally according to a user profile. An indication of the identity of each occupant may be computed, and matched to a respective user profile. The user profile may be stored in a profile database, for example, within a user profile data repository 120A stored in data repository 120 of computing device 108."; ¶ 0232, "At 504, one or more point clouds, optionally a sequence of point clouds, are analyzed to identify posture and/or gesture and/or behavior patterns of the driver and/or passengers. The point cloud may be classified into one of a set of predefined posture and/or gesture, for example, by one or more trained classifiers. The trained classifiers 120C may be locally stored by data repository storage device 120 of computing device 108."; ¶ 0245-¶ 0254, "Exemplary malicious behavior includes: [0246] Quick and/or sharp limb movements (e.g., swearing at other drivers and/or passengers). [0247] Contact with other passengers, optionally repeated contract (e.g., hitting other passengers, sexually inappropriate behavior). [0248] Abnormal gestures of limbs and/or body and/or head (e.g., seizure, heart attack, onset of psychiatric illness). [0249] Lack of limb and/or head gestures and/or body movement when limb and/or head gestures and/or body movement is expected, indicative of driver fatigue and/or distraction. [0250] Driver driving while holding a phone to their ear. [0251] Driver turning around during driving to look at the back seat. 
[0252] Driver looking at a direction other than the front of the vehicle above a predefined threshold (e.g., 2-3 seconds), for example, reading a message on a smartphone, looking at a newspaper located on the passenger seat. [0253] Driver not holding steering wheel. [0254] Driver holding steering wheel with one hand.") generating an output based on the determined condition. (see at least GLAZMAN, ¶ 0099, "At least some of the systems, apparatus, methods, and/or code instructions (stored in a data storage device executable by one or more hardware processors) described herein address the technical problem of improving safety of passengers in a vehicle. Safety is improved based on an analysis of the point cloud described herein to determine the location of passenger(s) within the vehicle compartment, and the weight and/or height of the passengers and/or behavior of the driver and/or passengers. For example, the activation of the airbag(s), seat belt(s), and/or automatic braking, are controlled to provide maximum safety with minimal injury risk to the passengers according to the location of the passengers and/or weight and/or height of the passengers. In another example, the point cloud, or a sequence of point clouds, are analyzed to identify malicious behavior of the driver and/or passengers which may lead to an increase risk of an accident. Safety measures may be activated to mitigate risk due to the malicious behavior, for example, a message to stop the malicious behavior, and/or automatic stopping of the vehicle."; ¶ 0187, "At 312, instructions are created by computing device 108 for transmission to one or more vehicle sub-systems 112A (e.g., ECU) over the vehicle network 112B (e.g., CAN-bus) for automatically adjusting one or more vehicle features according to the user profile. Each user profile may be associated with customized vehicle parameters, and/or general vehicle parameters, for example, stored as metadata and/or values of predefined fields stored in a database associated with the user profiles and/or stored in a record of the user profile.") GLAZMAN does not disclose, but DIMITROV teaches: positioning a plurality of time-of-flight sensors to monitor the compartment, wherein the plurality of time-of-flight sensors are in communication with each other; (see at least DIMITROV, ¶ 0010, “To overcome the problem introduced by the grille pattern and to improve the operation of the sensor system, two or more LiDAR sensors may be optimally placed at a certain distance apart such that when the point cloud data for each sensor are merged together, the interference pattern of the grille is partially or totally removed. In other words, one LiDAR sensor may “see” regions of the scene that are blocked on the other sensor and vice versa. 
The perception processing software may then combine data from the multiple point clouds to achieve a relatively unobstructed data set.”; ¶ 0011, “A method for imaging a scene in front of a vehicle may include: receiving, via a processing device and a memory, a first point cloud from a first LiDAR sensor mounted at a first location behind a vehicle grille, the first point cloud representing the scene in front of the vehicle, wherein as a result of the grille the scene represented by the first point cloud data may be partially occluded with a first pattern of occlusion receiving, via the processing device and the memory, a second point cloud from a second LiDAR sensor mounted at a second location behind the vehicle grille combining the first and second point clouds to generate a composite point cloud data set, wherein the first location of the first LiDAR sensor may be located relative to the second location of the second LiDAR sensor such that when a point cloud data for the first optical sensor and the second optical sensor are combined, the first pattern of occlusion may be at least partially compensated; and processing the combined point cloud data set.”; ¶ 0020, “In some embodiments, the first pattern of occlusion may be at least partially compensated when occlusions of the first pattern of occlusion are reduced in the combined point cloud data sets. In various embodiments, the first pattern of occlusion may be at least partially compensated when occlusions of the first pattern of occlusion are eliminated in the combined point cloud data sets.”) generating, via each of the plurality of time-of-flight sensors, a plurality of point clouds (see at least DIMITROV, ¶ 0011) comparing the plurality of point clouds generated by each of the plurality of time-of-flight sensors to one another to provide a compared point cloud; (see at least DIMITROV, ¶ 0010; ¶ 0011; ¶ 0034, “To overcome the problem introduced by the grille pattern and to improve operation of the processing system, two or more LiDAR sensors are optimally placed at a certain distance apart such that when the point cloud data for each sensor are merged together, the interference pattern of the grille is partially or totally removed. In other words, one LiDAR sensor may “see” regions of the scene that are blocked on the other sensor and vice versa. The image processing software may then combine data from the multiple point clouds to achieve a relatively unobstructed data set. This may represent an improvement over conventional technologies in that data can be combined and utilized with blind spots being reduced or eliminated.”) determining, via processing circuitry in communication to the plurality of time-of-flight sensors, (see at least DIMITROV, ¶ 0010; ¶ 0011; ¶ 0057, “Timing information can be used to measure the time-of-flight of the optical signal from its source at LiDAR system 212 the object off of which it bounces and back to the photodetector where its reflection is received. This time-of-flight can be used to measure the distance from the vehicle (from LiDAR system 212) to the object. A 3D LiDAR system, therefore, can capture two-dimensional data using photodetectors arranged in rows and columns and the third dimension, distance, determined based on the time-of-flight. LiDAR system 212 can be implemented using any of a number of different LiDAR technologies including electromechanical LiDAR and solid-state LiDAR. 
LiDAR system 212 can be implemented and configured to provide the system with 150°-180° of visibility toward the front of the subject vehicle, although other fields of view can be provided. LiDAR system 212 can be implemented with a relatively high degree of accuracy (e.g., on the order of +/−2 cms).”; ¶ 0060, “Cameras 214 may also be included to capture image data of the environment surrounding the vehicle. For object detecting and tracking solutions during forward vehicle motion, cameras 214 may be implemented as a forward facing cameras to capture image information of the environment in front of the vehicle. Cameras can be configured with image sensors to operate in any of a number of different spectra including, for example, the visible spectrum and the infrared spectrum. Data from cameras 214 may be used, for example, to detect objects. The system can be configured to utilize multisensor fusion to combine information from multiple sensors such as, for example, LiDAR sensors 212 and cameras 214.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the depth sensor system for monitoring car cabin within GLAZMAN to use multiple time-of-flight sensors such as LIDAR for data collection for processing to then be combined together for eliminating occlusions within DIMITROV to yield an effective car interior monitoring system that can robustly visualize around obstructions within the vehicle. EXAMINERS NOTE: GLAZMAN anticipates the use of a depth sensor to perform the functionality of mapping the vehicle’s cabin though point cloud generation (GLAZMAN, ¶ 0029, “In a further implementation form of the first aspect, the depth sensor outputting the depth map comprises a point cloud sensor that outputs a point cloud.”). However, the technology from GLAZMAN achieves it through the projection of infrared dots on the subject and their distortion collected by a camera sensor (GLAZMAN, ¶ 0154, “The imaging sensor can be a CMOS/CCD sensor with a pixel size of 1-10 μm and an HD/FHD/VGA resolution. Code when executed identifies the captured image spots, measures the shift (depth) of each spot from a reference (projected) location, reconstructs the depth by calculation and/or comparing shift length to a look-up table and/or comparing spot depth with adjacent spots to increase depth certainty.”). DIMITROV performs a point cloud collection based on depth sensors as disclosed by GLAZMAN, except the depth information is gathered with a time-of-flight sensor such as lidar. GLAZMAN in view of DIMITROV does not disclose, but BOTONJIC teaches: classifying the object as an occupant based on the shape by using an object classification unit that includes (see at least BOTONJIC, ¶ 0005, “A computing system may be configured to receive point cloud data from a LiDAR sensor or other similar sensor. The computing system may be further configured to convert the point cloud data into a structured data format, such as a frame of voxels (volume pixels). The computing system may then process the voxelized frame using a deep neural network. The deep neural network may be configured with a model that determines whether or not a person is present. The deep neural network also may perform a regression to estimate a pose for each of the one or more persons that are detected. In some examples, the computing system makes the determination of a person and the pose estimation serially. 
That is, in some examples, first the computing system detects a person with the deep neural network and then the computing system estimates the pose of the person using the deep neural network. In other examples, the computing system performs the determination of a person and the pose estimation in parallel. That is, in some examples, the computing system determines the presence of a person and the person's corresponding pose for each voxel at the same time. If the deep neural network determines that a person is not present in the voxel, the computing system discards the estimated pose.”; ¶ 0050, “Deep neural network 44 is configured to analyze the voxelized frame and produce two outputs for each of the voxels. One output may be called a classification. The classification indicates whether or not a person is present in the voxel being analyzed. The other output may be called a pose estimation that is produced from a regression. The regression determines the pose of the person (or a key point of a person) if such a person is present in the voxel. As will be explained in more detail below, deep neural network 44 may be configured to perform the classification and regression techniques in serial or in parallel.”) one or more neural networks that are in communication with (see at least BOTONJIC, ¶ 0007, “In another example, this disclosure describes techniques for annotating point cloud data. In order to train a deep neural network to estimate a pose of a person in point cloud data, the deep neural network may be configured and modified through processing of a training set of point cloud data. The training set of point cloud data is previously-labeled with the exact location and poses of persons within the point cloud (e.g., through manual labeling). This previous labeling of poses in the point cloud data may be referred to as annotation. Techniques for annotating human pose in two-dimensional images exist. However, annotating point cloud data is considerably different. For one, point cloud data is three-dimensional. Furthermore, point cloud data is sparse in relation to two-dimensional image data.”; ¶ 0008, “This disclosure describes a method, apparatus, and software for annotating point cloud data. A user may use the techniques of this disclosure to annotate point clouds to label one or more poses found in the point cloud data. The annotated point cloud data may then be used to train a neural network to more accurately identify and label poses in point cloud data in real-time.”) a body pose database, (see at least BOTONJIC, ¶ 0007) a skeleton model database, and (see at least BOTONJIC, ¶ 0053, “During processing by deep neural network 44, an anchor skeleton is activated (i.e., classified as positive for the presence of a person) if the overlapping area between a bounding box of the anchor skeleton and that of any ground truth skeleton (i.e., the data present in the voxel) satisfies a threshold condition. For example, if the overlapping area of the bounding box of the anchor skeleton and the voxel is above a certain threshold (e.g., 0.5), the anchor skeleton is activated for that voxel and the presence of a person is detected. The threshold may be a measurement of the amount of overlap (e.g., the intersection-over-union (IOU). Deep neural network 44 may make the classification based on comparison to one or more multiple different anchor skeletons. 
Deep neural network 44 may also be performed to perform a regression that encodes the difference between an anchor skeleton and the ground truth skeleton (i.e., the data in the actual voxel). Deep neural network 44 may be configured to encode this difference for each of a plurality of key points defined for the anchor skeleton. The difference between the key points of the anchor skeleton and the data in the voxel is indicative of the actual pose of the person detected during classification. Deep neural network may then be configured to provide the classification (e.g., a location of the determined one or more persons) and the pose for each of the determined one or more persons to post-processing unit 46. When multiple persons are detected from the point cloud, multiple anchor skeletons will be activated, thus achieving multi-person pose estimation.”; ¶ 0111, “When a user selects the load skel(s), annotation tool 242 opens a file explorer dialog box and loads any previously-annotated skeletons from a user selected file. The user can then edit any previously-annotated skeletons.”) a computer associated with the processing circuitry, (see at least BOTONJIC, ¶ 0061, “FIG. 6 is a conceptual diagram showing an example skeleton. Skeleton 100 may represent either a predefined anchor skeleton or the pose of a ground truth skeleton estimated using the techniques of the disclosure described above. In one example of the disclosure, skeleton 100 may be defined by a plurality of key points and/or joints. In the example of FIG. 6, skeleton 100 comprises 14 key points. As shown in FIG. 6, skeleton 100 is defined by head key point 102, neck key point 104, left shoulder key point 108, right shoulder key point 106, left elbow 112, right elbow key point 110, left hand key point 116, right hand key point 114, left waist key point 120, right waist key point 118, left knee key point 124, right knee key point 122, left foot key point 128, and right foot key point 126. To determine a pose, microprocessor 22 (see FIG. 2) may be configured to determine a location (e.g., a location in 3D space) of each of the key points of skeleton 100. That is, the locations of each of the key points of skeleton 100 relative to each other define the pose of the skeleton, and thus the pose of the person detected from the point cloud.”; ¶ 0066, “Microprocessor 22 may be further configured to process the voxelized frame using one or more 3D convolutional layers of a deep neural network (804), and to process the voxelized frame using one or more 2D convolutional layers of the deep neural network (806). Microprocessor 22 processes the voxelized frame using the 3D and 2D convolutional layers to determine one or more persons relative to the LiDAR sensor and a pose for each of the one or more persons. Microprocessor 22 may then output a location of the determined one or more persons and the pose for each of the determined one or more persons (808).”) wherein the skeleton model database comprises a plurality of keypoints corresponding to various joints of a skeleton, and (see at least BOTONJIC, ¶ 0052, “Deep neural network 44 may be configured to produce a classification and regression results for each anchor position. In one example, deep neural network may be configured to consider the center of a voxel as an anchor position. For each anchor position, deep neural network 44 may be configured to compare the data stored in the voxel to one or more predefined anchor skeletons (also called a standard or canonical skeleton). 
The anchor skeleton may be defined by a plurality of key points. In one example, anchor skeletons are defined by fourteen joints and/or key points: head, neck, left shoulder, right shoulder, left elbow, right elbow, left hand, right hand, left waist, right waist, left knee, right knee, left foot, and right foot. In general, a key point may correspond to a feature or structure of the human anatomy (e.g., a point on the human body).”; ¶ 0061) wherein the body pose database and the skeleton model database cooperate to provide one or more target point clouds comprising target keypoint information that corresponds to target body pose data; (see at least BOTONJIC, ¶ 0058, “FIG. 3 is a block diagram illustrating a process flow of one example of the disclosure. As shown in FIG. 3, LiDAR sensor 10 may be configured to capture a point cloud 30 that is the raw input to LiDAR-based pose estimation module 40. LiDAR-based pose estimation module 4 processes point cloud 30 with pre-processing unit 42 (voxelization) to produce a voxelized frame. Deep neural network 44 then processes the voxelized frame to produce classifications of one or more persons (e.g., the location of one or more persons) as well as the pose or poses for the classified one or more persons. The pose for a person is defined by the locations of a plurality of key points of a skeleton. The output of deep neural network 44 are preliminary 3D poses. Post-processing unit 46 processes the preliminary 3D poses with a non-maximum suppression algorithm to produce the output 3D poses.”) by having the object classification unit compare the compared point cloud to the one or more target point clouds to estimate a pose of the occupant; (see at least BOTONJIC, ¶ 0031, “This disclosure describes techniques for performing pose estimation using point cloud data, such as point cloud data produced by a LiDAR sensor. The point cloud output from a LiDAR sensor provides a 3D map of objects in the vicinity of the sensor. As such, depth information is available. In addition, as opposed to a camera sensor, a LiDAR sensor may generate the point clouds in a dark environment. The techniques of this disclosure include processing the point cloud from a LiDAR sensor using a deep neural network to detect the presence of persons near the sensor and to estimate the pose of such persons in order to make autonomous driving decisions.”; ¶ 0054, “Post-processing unit 46 may be configured to turn the output of deep neural network 44 into final output. For example, post-processing unit 46 may be configured to perform non-maximum suppression on the classified and estimated poses produced by deep neural network 44 and produce a final location and pose of the persons detected. Non-maximum suppression is an edge thinning technique. In some cases, deep neural network 44 will classify persons and estimate poses for many closely spaced groups of voxels where only one person actually exists. That is, in some circumstances, deep neural network will detect overlapping duplicates of the same person. Post-processing unit 46 may use non-maximum suppression techniques to remove duplicate skeletons. Post-processing unit 46 outputs the pose and location of the detected persons data 32. Pose and location of the detected persons data 32 may include the location of a person detected by LiDAR-based pose estimation module 40 (e.g., in terms of GPS coordinates) as well as a pose of a skeleton defining the person (e.g., the location of the key points). 
The pose and location of the detected persons data 32 may be stored in memory 24, sent to autonomous driving application 52, other applications 54, camera-based pose estimation application 56, or transmitted from computing system 14 to another computing system.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the occupant detection system with multiple LiDAR’s to render a point cloud of a car's interior within GLAZMAN in view of DIMITROV to implement a neural-network based deep learning human classification and pose-estimation system trained on human posture and anchor skeletons as within BOTONJIC to yield a more effective automated neural network based vehicle occupant classifier for improved pose detection. Regarding claim 3: GLAZMAN in view of DIMITROV in further view of BOTONJIK discloses the method of claim 1 and GLAZMAN further discloses: determining whether the occupant has limited movement of the body segment. (see at least GLAZMAN, ¶ 0221, "In another example, localized small movements without large displacement may be indicative of a small child that is moving his/her head, arms, and/or legs, while the body remains fixed due to being strapped in a car seat."; ¶ 0222, "The set of rules may be indicative of a driver and/or passenger that fell asleep in the passenger compartment. For example, micro movements indicative of breathing are detected for the passengers remaining in the car, without large scale limb and/or displacement motion.") Regarding claim 4: GLAZMAN in view of DIMITROV in further view of BOTONJIK discloses the method of claim 3 and GLAZMAN further discloses: comparing the plurality of instances; and (see at least GLAZMAN, ¶ 0232, "At 504, one or more point clouds, optionally a sequence of point clouds, are analyzed to identify posture and/or gesture and/or behavior patterns of the driver and/or passengers. The point cloud may be classified into one of a set of predefined posture and/or gesture, for example, by one or more trained classifiers. The trained classifiers 120C may be locally stored by data repository storage device 120 of computing device 108."; ¶ 0376, "Relative locations of one or more body parts (e.g., hand, leg, torso, head) may be computed according to the depth data. A body posture classification category may be computed according to the identified relative locations of the body part(s).") determining six degrees of freedom of a movement of the body segment based on the point cloud. (see at least GLAZMAN, ¶ 0009, "At least some of the systems, apparatus, methods, and/or code instructions (stored in a data storage device executable by one or more hardware processors) described herein address the technical problem of increasing accuracy of adjustment of vehicle sub-systems for occupants of the vehicle. Other occupant tracking systems, for example that rely only on images, may provide only a limited amount of location data, resulting in relatively less accuracy defining the location of the head of the occupant. In contrast, at least some of the systems, apparatus, methods, and/or code instructions described herein compute a full pose computation and/or a full six degrees of freedom for occupants, optionally for each occupant. It is noted that since humans are not rigid objects, motion may be complex. When the 6 DOF are inadequate for representing the complex motion, the full pose computation may be computed. 
The 6 degrees of freedom and/or full pose computation enable relatively higher accuracy in adjustment of the vehicle sub-systems. As used herein, the term 6 DOF may sometimes be substituted with the term full pose computation.") GLAZMAN does not disclose, but DIMITROV teaches: Capturing the point cloud of each of the plurality of time-of-flight sensors at a plurality of instances; (see at least DIMITROV, ¶ 0011, “A method for imaging a scene in front of a vehicle may include: receiving, via a processing device and a memory, a first point cloud from a first LiDAR sensor mounted at a first location behind a vehicle grille, the first point cloud representing the scene in front of the vehicle, wherein as a result of the grille the scene represented by the first point cloud data may be partially occluded with a first pattern of occlusion receiving, via the processing device and the memory, a second point cloud from a second LiDAR sensor mounted at a second location behind the vehicle grille combining the first and second point clouds to generate a composite point cloud data set, wherein the first location of the first LiDAR sensor may be located relative to the second location of the second LiDAR sensor such that when a point cloud data for the first optical sensor and the second optical sensor are combined, the first pattern of occlusion may be at least partially compensated; and processing the combined point cloud data set.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the depth sensor system for monitoring car cabin within GLAZMAN to use multiple time-of-flight sensors such as LIDAR for data collection to then be combined together for eliminating occlusions within DIMITROV to yield an effective car interior monitoring system that can robustly visualize around obstructions within the vehicle. Regarding claim 5: GLAZMAN in view of DIMITROV in further view of BOTONJIK discloses the method of claim 4 and GLAZMAN further discloses: determining a restriction of at least one of the six degrees of freedom based on the comparison of the plurality of instances. (see at least GLAZMAN, ¶ 0097, "The same and/or single point cloud computed from the output of the point cloud sensor may be analyzed to compute the six degree of freedom for all multiple occupants in the cabin. In contrast to other methods that require a dedicated head tracking system to track each occupant (e.g., each occupant requires their own dedicated head tracking system that tracks only their respective head), at least some of the systems, apparatus, methods, and/or code instructions described herein use the same (e.g., single) point cloud computed from the same (e.g., single point cloud sensor) to compute (e.g., in parallel and/or simultaneously) six degrees of freedom for each of multiple occupants."; ¶ 0221, "In another example, localized small movements without large displacement may be indicative of a small child that is moving his/her head, arms, and/or legs, while the body remains fixed due to being strapped in a car seat."; ¶ 0222, "The set of rules may be indicative of a driver and/or passenger that fell asleep in the passenger compartment. 
For example, micro movements indicative of breathing are detected for the passengers remaining in the car, without large scale limb and/or displacement motion.") EXAMINERS NOTE: GLAZMAN suggests their claimed invention is detecting persons with restricted degrees of freedom such as a child based on limited movement abilities and adults with movement limited by consciousness (such as sleeping). Regarding claim 6: GLAZMAN in view of DIMITROV in further view of BOTONJIK discloses the method of claim 4 and GLAZMAN further discloses: communicating, via the processing circuitry to a window control system of the vehicle, a signal to adjust a window to open or close the window based on detection of the condition. (see at least GLAZMAN, ¶ 0015, "Sub-systems of the vehicle may be adjusted for each passenger and/or the driver according to the corresponding personal profile and/or the height and/or weight and/or posture and/or gesture, for example, the height of the head rest, the angle of the seat, the radio station may be selected, the air temperature may be set by the air conditioner, the state of the window may be set, and/or content presented by an infotainment system installed in the vehicle may be selected accordingly."; ¶ 0223, "At 416, when a passenger (determined to be a baby or small child based on an analysis of weight and/or height and/or according to the identified identity as described herein) is determined, based on an analysis of the point cloud, to be present alone in the parked vehicle (e.g., forgotten, fallen asleep), computing device 108 may generate one or more message in an attempt to save the passenger from an undesired situation, for example, to prevent the baby and/or passenger from heating up in a hot car, and/or being hit by other cars.") EXAMINERS NOTE: GLAZMAN suggests that their claimed invention possesses the ability to alter vehicle systems such as the state of the window based on user profiles. Likewise, GLAZMAN anticipates the need to protect people from dying in locked cars based on limited mobility (such as a child or person asleep) by raising the AC or sounding the alarm. Given the potential to operate the windows unit based on passenger profiles, it could be deduced that automatically lowering the window could be a means to cool the interior. Regarding claim 7: GLAZMAN in view of DIMITROV in further view of BOTONJIK discloses the method of claim 1 and GLAZMAN further discloses: presenting, at a user interface in communication with the processing circuitry, an option to the occupant to select the condition. (see at lease GLAZMAN, ¶ 0140, "User interface(s) 122 may be integrated with a display installed in the vehicle, and/or be implemented as a separate device for example, as the user interface of the mobile device of the user. User interface(s) 122 may be implemented as, for example, a touchscreen, a keyboard, a mouse, and voice activated software using speakers and microphone."; ¶ 0303, "The image is analyzed to detect a two dimensional (2D) location of a head of occupant(s), optionally the head of each of the occupants. 
Alternatively, some heads are detected and other heads are ignored, for example, based on user preference which may be obtained via manual user input using a graphical user interface, and/or based on stored user profiles and/or based on a predefined set of rules for selection of occupants to detect.") Regarding claim 8: GLAZMAN in view of DIMITROV in further view of BOTONJIK discloses the method of claim 7 and GLAZMAN further discloses: adjusting the target keypoints based on the option selected. (see at least GLAZMAN, ¶ 0015, "At least some of the systems, apparatus, methods, and/or code instructions (stored in a data storage device executable by one or more hardware processors) described herein provide a unique user experience to the driver and/or passengers of the vehicle. For example, the identity of the driver and/or passengers are automatically determined based on an analysis of the point cloud(s). A personal profile may be retrieved based on the identified identity. Alternatively, the height and/or weight and/or posture and/or gesture of the driver and/or passengers are computed based on an analysis of the point cloud. Sub-systems of the vehicle may be adjusted for each passenger and/or the driver according to the corresponding personal profile and/or the height and/or weight and/or posture and/or gesture, for example, the height of the head rest, the angle of the seat, the radio station may be selected, the air temperature may be set by the air conditioner, the state of the window may be set, and/or content presented by an infotainment system installed in the vehicle may be selected accordingly."; ¶ 0303, "The image is analyzed to detect a two dimensional (2D) location of a head of occupant(s), optionally the head of each of the occupants. Alternatively, some heads are detected and other heads are ignored, for example, based on user preference which may be obtained via manual user input using a graphical user interface, and/or based on stored user profiles and/or based on a predefined set of rules for selection of occupants to detect.") Regarding claim 9: GLAZMAN in view of DIMITROV in further view of BOTONJIK discloses the method of claim 1 and GLAZMAN further discloses: classifying, by the processing circuitry, the occupant as a human child, a human adult, or an animal based on the shape. (see at least GLAZMAN, ¶ 0269. "At 602, the point cloud is analyzed to estimate the height and/or weight of the driver and/or passengers. It is noted that each car seat may be classified as being occupied by a passenger, or empty, or including one or more inanimate objects. The analysis of the seat may be performed, for example, by an analysis of a correlation between the current point cloud and a point cloud of an empty passenger compartment. Variations in the current point cloud relative to the point cloud of the empty passenger compartment at locations corresponding to seats is indicative of the presence of a passenger or inanimate object. 
Inanimate objects may be distinguished for human and/or pet passengers based on detected motion, as described herein."; ¶ 0223, "At 416, when a passenger (determined to be a baby or small child based on an analysis of weight and/or height and/or according to the identified identity as described herein) is determined, based on an analysis of the point cloud, to be present alone in the parked vehicle (e.g., forgotten, fallen asleep), computing device 108 may generate one or more message in an attempt to save the passenger from an undesired situation, for example, to prevent the baby and/or passenger from heating up in a hot car, and/or being hit by other cars.") Regarding claim 10: GLAZMAN in view of DIMITROV in further view of BOTONJIK discloses the method of claim 1 and GLAZMAN further discloses: determining a pose of the occupant based on the three-dimensional positional information; (see at least GLAZMAN, ¶ 0069, "In a further implementation form of the third aspect, the system further comprises code for analyzing the point cloud to compute posture and/or gesture and/or behavior of the at least one occupant, computing an indication of malicious behavior by a trained classifier provided with an input of an indication of the posture and/or gesture and/or behavior of the at least one occupant, and wherein the instructions are generated according to the indication of malicious behavior.") comparing, via the processing circuitry, the pose to body pose data stored in a database in communication with the processing circuitry; and (see at least GLAZMAN, ¶ 0240, "The posture and/or gesture of the driver and/or passengers is automatically determined, optionally continuously. Posture and/or gesture may be computed at predefined time frames, for example, every 1 second, or 5 seconds, or 10 seconds, or 15 seconds. Posture and/or gesture may be computed based on a trigger, for example, significant displacement of a limb and/or body according to a requirement.") determining an unfocused state of the occupant based on the comparison of the pose to the body pose data. (see at least GLAZMAN, ¶ 0244, "The malicious behavior may be identified, for example, based on an analysis of the location and/or direction of the head and/or hands of the driver, for example, to determine whether the driver is paying attention to the driver or distracted from driving."; ¶ 0245-¶ 0254, "Exemplary malicious behavior includes: Quick and/or sharp limb movements (e.g., swearing at other drivers and/or passengers). Contact with other passengers, optionally repeated contract (e.g., hitting other passengers, sexually inappropriate behavior). Abnormal gestures of limbs and/or body and/or head (e.g., seizure, heart attack, onset of psychiatric illness). Lack of limb and/or head gestures and/or body movement when limb and/or head gestures and/or body movement is expected, indicative of driver fatigue and/or distraction. Driver driving while holding a phone to their ear. Driver turning around during driving to look at the back seat. Driver looking at a direction other than the front of the vehicle above a predefined threshold (e.g., 2-3 seconds), for example, reading a message on a smartphone, looking at a newspaper located on the passenger seat. Driver not holding steering wheel. Driver holding steering wheel with one hand.") Regarding claim 12: With regards to claim 12, this claim is the system claim to method claim 1 and is substantially similar to claim 1 and is therefore rejected using the same references and rationale. 
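
To make the keypoint-comparison logic recited in claims 1 and 10 (and mapped above to GLAZMAN and BOTONJIC) concrete, the following is a minimal sketch of comparing detected skeleton keypoints against target keypoints drawn from a body pose database and flagging an unfocused condition. The joint set, distance metric, and threshold are illustrative assumptions, not the application's or the cited references' implementation.

import numpy as np

JOINTS = ["head", "neck", "l_shoulder", "r_shoulder", "l_hand", "r_hand"]

def pose_deviation(detected, target):
    # Mean Euclidean distance (in metres) between detected keypoints and the
    # target keypoints for the monitored body segments.
    return float(np.mean([np.linalg.norm(detected[j] - target[j]) for j in JOINTS]))

def is_unfocused(detected, target, threshold=0.15):
    # Treat a large deviation from the target pose (e.g., head and hands away
    # from a driving posture) as the "unfocused" condition of the occupant.
    return pose_deviation(detected, target) > threshold

# Usage: target keypoints would come from the body pose / skeleton model
# databases; detected keypoints from the compared point cloud.
target_pose = {j: np.zeros(3) for j in JOINTS}
detected_pose = {j: np.array([0.0, 0.0, 0.2]) for j in JOINTS}
print(is_unfocused(detected_pose, target_pose))  # True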
Regarding claim 13: With regards to claim 13, this claim is substantially similar to claim 3 and is therefore rejected using the same references and rationale. Regarding claim 14: With regards to claim 14, this claim is substantially similar to claim 4 and is therefore rejected using the same references and rationale. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over GLAZMAN (US20210179117A1) in view of DIMITROV (US 20230071443 A1) in further view of BOTONJIC (US 20200294266 A1) in further view of HSU (A Review and Perspective on Optical Phased Array for Automotive LiDAR; IEEE JOURNAL OF SELECTED TOPICS IN QUANTUM ELECTRONICS, VOL. 27, NO. 1, JANUARY/FEBRUARY 2021). Regarding claim 2: GLAZMAN in view of DIMITROV in further view of BOTONJIC discloses the method of claim 1 and GLAZMAN does not disclose, but DIMITROV teaches: each of the plurality of time-of-flight sensors (see at least DIMITROV, ¶ 0011, “A method for imaging a scene in front of a vehicle may include: receiving, via a processing device and a memory, a first point cloud from a first LiDAR sensor mounted at a first location behind a vehicle grille, the first point cloud representing the scene in front of the vehicle, wherein as a result of the grille the scene represented by the first point cloud data may be partially occluded with a first pattern of occlusion receiving, via the processing device and the memory, a second point cloud from a second LiDAR sensor mounted at a second location behind the vehicle grille combining the first and second point clouds to generate a composite point cloud data set, wherein the first location of the first LiDAR sensor may be located relative to the second location of the second LiDAR sensor such that when a point cloud data for the first optical sensor and the second optical sensor are combined, the first pattern of occlusion may be at least partially compensated; and processing the combined point cloud data set.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the depth sensor system for monitoring car cabin within GLAZMAN to use multiple time-of-flight sensors such as LIDAR for data collection to then be combined together for eliminating occlusions within DIMITROV to yield an effective car interior monitoring system that can robustly visualize around obstructions within the vehicle. GLAZMAN in view of DIMITROV does not disclose, but HSU teaches. includes a LiDAR module configured to detect light having a wavelength of at least 1500 nm. (see at least HSU, Page 5, Section III.A, ¶ 3, "Eye safety is one of the main reasons behind the proposal of 1550 nm as emerging alternative for automotive LiDAR; higher permissible optical power correlates to better performance (higher signal to noise ratio, increased robustness to noise factors) and increased Rmax") It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the car interior system with multiple LiDAR’s within GLAZMAN in view of DIMITROV to utilize lasers with wavelengths higher than 1500 nm to yield higher resolution point clouds as the increased permitted power limit allows for a higher signal-to-noise ratio. Claims 11 and 15-20 are rejected under 35 U.S.C. 
103 as being unpatentable over GLAZMAN (US20210179117A1) in view of DIMITROV (US 20230071443 A1) in further view of BOTONJIC (US 20200294266 A1) in further view of TORABI (US 20210402942 A1). Regarding claim 11: GLAZMAN in view of DIMITROV in further view of BOTONJIK discloses the method of claim 1 and GLAZMAN further discloses: a powertrain that is configured to drive a motion of a vehicle, wherein the powertrain includes at least (see at least GLAZMAN, ¶ 0142, “Computing device 108 may be in communication with an accelerometer 150 that senses acceleration and/or deceleration of vehicle 104. Accelerometer 150 may be installed within vehicle 104 (e.g., within one or more sub-systems 112A) and/or accelerometer 150 may be installed within computing device 108. The acceleration and/or deceleration data outputted by accelerometer 150 may trigger one or more events for computation and/or analysis of the point cloud. The events may be associated with a tag and/or label (e.g., metadata, pre-classified event) assigned to the data. For example, when the car is accelerating fast (e.g., an indication of reckless driving), when the car is decelerating quickly (e.g., indication of a collision about to occur or a near collision), when the car is not accelerating and/or decelerating (e.g., car is stopped), and/or accelerating and/or decelerating within a ranged defined as normal (e.g., car is driving normally). The event identified based on data outputted by the accelerometer 150 may trigger one or more features described with reference to FIGS. 3-10. Alternatively or additionally, the data outputted by the accelerometer 150 may be analyzed in association with the point cloud data, for example, to identify reckless driving based on abnormal gestures identified based on the point cloud and fast acceleration and/or deceleration. Alternatively or additionally, the data outputted by the accelerometer 150 may be analyzed in association with one or more point clouds (e.g., a sequence and/or video of point clouds captured over time) to differentiate and/or classify identified motion, for example, to differentiate between global movement of the entire care which is captured by the output of accelerometer 150 (e.g., strong wind rocking the car, car slipping on a slipper surface, acceleration and/or deceleration motion of the car) and micro-vibrational movement (e.g., heartbeat and/or gestures and/or motion of passengers and/or driver).”; ¶ 0278, “At 610, instructions are generated for setting and/or adjusting one or more safety features by respective vehicle sub-systems according to the weight and/or height and/or posture and/or gesture and/or personal profile of the driver and/or passengers (referred to herein as the data). The adjustments may be performed by computing device 108 generating instructions according to the data, and transmitting the generated instructions over the vehicle network to the relevant sub-systems.”) an ignition system, (see at least GLAZMAN, ¶ 0180, “At 308, when the driver and/or passenger is identified as prohibited from driving the vehicle and/or not matched to an entry associated with a user allowed to drive the vehicle, an indication of unauthorized attempt at driving the vehicle and/or unauthorized passenger is created.”; ¶ 0186, “The indication may trigger a signal that enables ignition of the car. 
Alternatively, the ignition of the car is enabled when one or more vehicle parameters have completed automatic adjustment, as described with reference to act 312.”) a steering system, (see at least GLAZMAN, ¶ 0282, “Crash avoidance systems controlled by ECUs are set and/or adjusted according to the data. For example, the braking system, the car steering, and/or the suspension system are adjusted according to the data.”) a transmission system, and (see at least GLAZMAN, ¶ 0132, “Computing sub-systems 112A are installed within vehicle 104, for example, multiple electronic control units (ECU) that execute various functions within the vehicle, and/or transmission control unit (TCU) that controls the transmission of the vehicle.”) a brake system, (see at least GLAZMAN, ¶ 0014, ” At least some of the systems, apparatus, methods, and/or code instructions (stored in a data storage device executable by one or more hardware processors) described herein improve performance of existing computing sub-systems within the vehicle based on output of the point cloud sensor. For example, features executed by existing sub-systems within the vehicle may be automatically controlled based on the instructions generated according to the point cloud(s) outputted by the point cloud sensor. The automatic control of the sub-systems may improve efficiency of existing vehicle sub-systems, for example, adjusting the braking system for optimal braking according to the total weight and/or location of the passengers in the vehicle, which may reduce wear on the brakes and/or result in braking in time to avoid a collision.”) further comprising: processing the point cloud captured by (see at least GLAZMAN, ¶ 0011, “The improvement may be obtained at least based on the point cloud sensor described herein that captures depth data of the cabin, optionally the entire cabin, including multiple occupants located at any possible seating combination available in the cabin"; ¶ 0029, "In a further implementation form of the first aspect, the depth sensor outputting the depth map comprises a point cloud sensor that outputs a point cloud”; ¶ 0269, “At 602, the point cloud is analyzed to estimate the height and/or weight of the driver and/or passengers. It is noted that each car seat may be classified as being occupied by a passenger, or empty, or including one or more inanimate objects. The analysis of the seat may be performed, for example, by an analysis of a correlation between the current point cloud and a point cloud of an empty passenger compartment. Variations in the current point cloud relative to the point cloud of the empty passenger compartment at locations corresponding to seats is indicative of the presence of a passenger or inanimate object. Inanimate objects may be distinguished for human and/or pet passengers based on detected motion, as described herein.”) target operations including (see at least GLAZMAN, ¶ 0099, “At least some of the systems, apparatus, methods, and/or code instructions (stored in a data storage device executable by one or more hardware processors) described herein address the technical problem of improving safety of passengers in a vehicle. Safety is improved based on an analysis of the point cloud described herein to determine the location of passenger(s) within the vehicle compartment, and the weight and/or height of the passengers and/or behavior of the driver and/or passengers. 
For example, the activation of the airbag(s), seat belt(s), and/or automatic braking, are controlled to provide maximum safety with minimal injury risk to the passengers according to the location of the passengers and/or weight and/or height of the passengers. In another example, the point cloud, or a sequence of point clouds, are analyzed to identify malicious behavior of the driver and/or passengers which may lead to an increase risk of an accident. Safety measures may be activated to mitigate risk due to the malicious behavior, for example, a message to stop the malicious behavior, and/or automatic stopping of the vehicle.”; ¶ 0187, “At 312, instructions are created by computing device 108 for transmission to one or more vehicle sub-systems 112A (e.g., ECU) over the vehicle network 112B (e.g., CAN-bus) for automatically adjusting one or more vehicle features according to the user profile. Each user profile may be associated with customized vehicle parameters, and/or general vehicle parameters, for example, stored as metadata and/or values of predefined fields stored in a database associated with the user profiles and/or stored in a record of the user profile.”) communicating, via the processing circuitry to (see at least GLAZMAN, ¶ 0132; ¶ 0133, “One or more vehicle network(s) 112B are installed within vehicle 104. Networks 112B connect different electronic and/or computing sub-systems 112A within the vehicle, and/or that connect computing sub-systems 112A of the vehicle to externally located computing devices (e.g., using a wireless connection and/or wired connection). Exemplary networks include: canvas, can-fd, flexray, and Ethernet.”) the ignition system, (see at least GLAZMAN, ¶ 0180; ¶ 0186) the steering system, (see at least GLAZMAN, ¶ 0282) the transmission system, and/or (see at least GLAZMAN, ¶ 0132) the brake system of the vehicle, (see at least GLAZMAN, ¶ 0014) a signal to adjust an operation of the vehicle (see at least GLAZMAN, ¶ 0258, "Exemplary safety mitigation instructions include: [0259] Transmitting instructions to the airbag sub-system to deactivate one or more airbags when the passenger and/or driver are sitting in a dangerous position. [0260] Automatically and safety instructing the vehicle to stop at the side of the road, for example, by transmission of instructions to sub-systems handling emergency stopping. [0261] Reducing the volume of music playing in the car. [0262] Disconnecting the phone from the car speaker system. [0263] Transmitting instructions to lower a shade to block sun, for example, when behavior of the driver and/or passenger indicates the user(s) is being blinded by the sun. For example, the driver's head is positioned at an abnormal location for a prolonged period of time to avoid the sun, and/or the driver's hand is positioned as a shade over the eyes of the driver to block out the sun.") GLAZMAN does not disclose, but DIMITROV teaches: the plurality of time-of-flight sensors to determine (see at least DIMITROV, ¶ 0010, “To overcome the problem introduced by the grille pattern and to improve the operation of the sensor system, two or more LiDAR sensors may be optimally placed at a certain distance apart such that when the point cloud data for each sensor are merged together, the interference pattern of the grille is partially or totally removed. In other words, one LiDAR sensor may “see” regions of the scene that are blocked on the other sensor and vice versa. 
The perception processing software may then combine data from the multiple point clouds to achieve a relatively unobstructed data set.”; ¶ 0011, “A method for imaging a scene in front of a vehicle may include: receiving, via a processing device and a memory, a first point cloud from a first LiDAR sensor mounted at a first location behind a vehicle grille, the first point cloud representing the scene in front of the vehicle, wherein as a result of the grille the scene represented by the first point cloud data may be partially occluded with a first pattern of occlusion receiving, via the processing device and the memory, a second point cloud from a second LiDAR sensor mounted at a second location behind the vehicle grille combining the first and second point clouds to generate a composite point cloud data set, wherein the first location of the first LiDAR sensor may be located relative to the second location of the second LiDAR sensor such that when a point cloud data for the first optical sensor and the second optical sensor are combined, the first pattern of occlusion may be at least partially compensated; and processing the combined point cloud data set.”; ¶ 0020, “In some embodiments, the first pattern of occlusion may be at least partially compensated when occlusions of the first pattern of occlusion are reduced in the combined point cloud data sets. In various embodiments, the first pattern of occlusion may be at least partially compensated when occlusions of the first pattern of occlusion are eliminated in the combined point cloud data sets.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the depth sensor system for monitoring car cabin within GLAZMAN to use multiple time-of-flight sensors such as LIDAR for data collection to then be combined together for eliminating occlusions within DIMITROV to yield an effective car interior monitoring system that can robustly visualize around obstructions within the vehicle. GLAZMAN in view of DIMITROV does not disclose, but TORABI teaches: target steering angles, (see at least TORABI, ¶ 0094, “A steering system 1054, which may include a steering wheel, may be used to steer the vehicle 1000 (e.g., along a desired path or route) when the propulsion system 1050 is operating (e.g., when the vehicle is in motion). The steering system 1054 may receive signals from a steering actuator 1056. The steering wheel may be optional for full automation (Level 5) functionality.”; ¶ 0096, “Controller(s) 1036, which may include one or more system on chips (SoCs) 1004 (FIG. 10C) and/or GPU(s), may provide signals (e.g., representative of commands) to one or more components and/or systems of the vehicle 1000. For example, the controller(s) may send signals to operate the vehicle brakes via one or more brake actuators 1048, to operate the steering system 1054 via one or more steering actuators 1056, to operate the propulsion system 1050 via one or more throttle/accelerators 1052. The controller(s) 1036 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving the vehicle 1000. 
The controller(s) 1036 may include a first controller 1036 for autonomous driving functions, a second controller 1036 for functional safety functions, a third controller 1036 for artificial intelligence functionality (e.g., computer vision), a fourth controller 1036 for infotainment functionality, a fifth controller 1036 for redundancy in emergency conditions, and/or other controllers. In some examples, a single controller 1036 may handle two or more of the above functionalities, two or more controllers 1036 may handle a single functionality, and/or any combination thereof.”) rates of motion, and/or (see at least TORABI, ¶ 0096) speed changes; (see at least TORABI, ¶ 0096) communicating the target operations to the powertrain to allow for at least partially autonomous control over the motion of the vehicle; and (see at least TORABI, ¶ 0060, “With reference to FIG. 1, the system 100 may include a safety actuator 126 that may be used to send audio and/or visual notifications based on an identified activity in the vehicle (e.g., hands on wheel reminder), and/or to aid in control or actuation decisions (e.g., to activate or deactivate autonomous driving, to execute a safety procedure, etc.). In addition, the safety actuator 126 may carry out one or more actions 128 based on an identified activity in the vehicle (e.g., contacting emergency services when sudden sickness detected, deactivating air-bags based on body position or size, etc.). In some embodiments, audio notifications may be customized based on a level of driver disengagement related to a particular activity. For instance, such audio notifications may relate to activities such as distracted driver notifications for as texting, answering a phone, reading, etc.”; ¶ 0096) based on detection of the unfocused state, wherein adjustment of operation of the vehicle comprises adjusting from a manual mode to an at least semi-autonomous mode. (see at least TORABI, ¶ 0021, “In particular, the current system is capable of accurately identifying driver and passenger in-cabin activities (e.g., based on body position, size of person, classification of gestures, etc.) that may indicate a biomechanical distraction that prevents a driver from being fully engaged in driving a vehicle. Based on identified in-cabin activities, the system may act accordingly by performing one or more actions (e.g., provide notifications, perform a safety maneuver, etc.). For instance, the system may adapt and/or respond to the identified in-cabin activities to address needs and/or requirements related to the driver or passengers based on an identified in-cabin activity. As an example, and based on the driver and/passenger in-cabin activities—determined at a more granular level, such as identifying specific hand gestures, body poses, body postures, occupancy maps, etc.—human-machine interactions may be adjusted and adapted to a current state of the occupants (e.g., by braking the vehicle, contacting emergency services, providing a visual, audible, and/or tactile warning notification, taking over control of the vehicle, surrendering control of the vehicle, etc.).”; ¶ 0042, “In addition, the system 100 may include a hand activity recognizer 110. The hand activity recognizer 110 may use hand bounding boxes 112 to normalize the position of a hand in an image (e.g., by centering a bounding box on the hand). 
In some embodiments, the hand activity recognition network 114 and classifier 116 may use information output by the body-pose estimator and shape reconstructor 102 (e.g., output 104) to determine a portion (e.g., within the bounding boxes 112) of image data that corresponds to hands of an occupant. The hand activity recognizer 110 may also include a hand recognition network 114 and a classifier 116 (e.g., a DNN(s)). For instance, the hand activity recognition network 114 and classifier 116 may check whether a driver is engaged in one or more distracting activities. In particular, the hand activity recognition network 114 and classifier 116 may use body-pose and shape to determine activities performed by the driver that relate to the driver's hands (e.g., texting, hands on/off wheel, drinking or eating, etc.).”; ¶ 0083, “The method 800, at block B814, includes performing an action. In particular, the first activity corresponding to the left hand and the second activity corresponding to the right hand may be used to make determinations such as whether the person inside the vehicle is engaged in one or more distracting activities. Actions may be performed using, for example the safety actuator 126. For instance, the safety actuator 126 may be used to send audio and/or visual notifications based on an identified activity in the vehicle (e.g., hands on wheel reminder), aid in control or actuation decisions (e.g., to activate or deactivate autonomous driving, to execute a safety procedure, etc.). In addition, the safety actuator 126 may carry out one or more actions based on the first and second activity.”; ¶ 0170, “In some examples, LIDAR technologies, such as 3D flash LIDAR, may also be used. 3D Flash LIDAR uses a flash of a laser as a transmission source, to illuminate vehicle surroundings up to approximately 200 m. A flash LIDAR unit includes a receptor, which records the laser pulse transit time and the reflected light on each pixel, which in turn corresponds to the range from the vehicle to the objects. Flash LIDAR may allow for highly accurate and distortion-free images of the surroundings to be generated with every laser flash. In some examples, four flash LIDAR sensors may be deployed, one at each side of the vehicle 1000. Available 3D flash LIDAR systems include a solid-state 3D staring array LIDAR camera with no moving parts other than a fan (e.g., a non-scanning LIDAR device). The flash LIDAR device may use a 5 nanosecond class I (eye-safe) laser pulse per frame and may capture the reflected laser light in the form of 3D range point clouds and co-registered intensity data. By using flash LIDAR, and because flash LIDAR is a solid-state device with no moving parts, the LIDAR sensor(s) 1064 may be less susceptible to motion blur, vibration, and/or shock.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the vehicle with occupant detection system with multiple LiDAR’s to render a point cloud of a car's interior and passengers within GLAZMAN in view of DIMITROV in further view of BOTONJIC to implement the analysis of driver behavior for distracted driving and accordingly modifying vehicle driving within TORABI to yield a safer vehicle that prevents unsafe maneuvers caused by distracted driving. 
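For context on the combination applied to claim 1, the following is a minimal illustrative sketch (not drawn from the application or from GLAZMAN, DIMITROV, or TORABI; all function names, seat bounds, and thresholds are assumptions) of merging point clouds from two cabin time-of-flight sensors and comparing the merged cloud against an empty-cabin baseline to flag seat occupancy, in the manner the cited passages describe.

# Illustrative sketch only; assumed names and thresholds, not from the cited references.
import numpy as np

def merge_point_clouds(cloud_a: np.ndarray, cloud_b: np.ndarray) -> np.ndarray:
    # Concatenate two (N, 3) XYZ clouds already expressed in the same cabin frame,
    # so regions occluded from one sensor can be filled in by the other.
    return np.vstack([cloud_a, cloud_b])

def seat_occupied(merged: np.ndarray,
                  empty_baseline: np.ndarray,
                  seat_min: np.ndarray,
                  seat_max: np.ndarray,
                  height_threshold_m: float = 0.05,
                  min_points: int = 200) -> bool:
    # Compare points inside a seat's bounding box against the empty-cabin
    # baseline: a sustained rise in surface height suggests an occupant or object.
    in_seat = merged[np.all((merged >= seat_min) & (merged <= seat_max), axis=1)]
    baseline = empty_baseline[np.all((empty_baseline >= seat_min) &
                                     (empty_baseline <= seat_max), axis=1)]
    if len(in_seat) < min_points or len(baseline) == 0:
        return False
    return (in_seat[:, 2].mean() - baseline[:, 2].mean()) > height_threshold_m

Distinguishing occupants from inanimate objects would additionally require motion over successive frames, as GLAZMAN suggests; that step is omitted from this sketch.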
Regarding claim 15: With regards to claim 15, this claim is substantially similar to claims 10 and 11 combined and is therefore rejected using the same references and rationale.
Regarding claim 16: With regards to claim 16, this claim is the system claim to method claim 1 and is substantially similar to claim 1 with elements of claims 10 and 11 rolled in and is therefore rejected using the same references and rationale.
Regarding claim 17: With regards to claim 17, this claim is substantially similar to claim 3 and is therefore rejected using the same references and rationale.
Regarding claim 18: With regards to claim 18, this claim is substantially similar to claim 4 and is therefore rejected using the same references and rationale.
Regarding claim 19: With regards to claim 19, this claim is substantially similar to claim 5 and is therefore rejected using the same references and rationale.
Regarding claim 20: With regards to claim 20, this claim is substantially similar to claim 11 and is therefore rejected using the same references and rationale.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over GLAZMAN (US20210179117A1) in view of DIMITROV (US 20230071443 A1) in further view of BOTONJIC (US 20200294266 A1) in further view of TORABI (US 20210402942 A1) in further view of HAYMAN (US 20220172475 A1).
Regarding claim 21: GLAZMAN in view of DIMITROV in further view of BOTONJIC discloses the method of claim 16, and GLAZMAN does not disclose, but DIMITROV teaches: to generate an additional point cloud from a viewing angle different than field-of-views of the plurality of LiDAR modules. (see at least DIMITROV, ¶ 0011, “A method for imaging a scene in front of a vehicle may include: receiving, via a processing device and a memory, a first point cloud from a first LiDAR sensor mounted at a first location behind a vehicle grille, the first point cloud representing the scene in front of the vehicle, wherein as a result of the grille the scene represented by the first point cloud data may be partially occluded with a first pattern of occlusion receiving, via the processing device and the memory, a second point cloud from a second LiDAR sensor mounted at a second location behind the vehicle grille combining the first and second point clouds to generate a composite point cloud data set, wherein the first location of the first LiDAR sensor may be located relative to the second location of the second LiDAR sensor such that when a point cloud data for the first optical sensor and the second optical sensor are combined, the first pattern of occlusion may be at least partially compensated; and processing the combined point cloud data set.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the depth sensor system for monitoring car cabin within GLAZMAN to use multiple time-of-flight sensors such as LIDAR for data collection to then be combined together for eliminating occlusions within DIMITROV to yield an effective car interior monitoring system that can robustly visualize around obstructions within the vehicle. GLAZMAN in view of DIMITROV does not disclose, but HAYMAN teaches:
including a mobile device directed from an external location of the vehicle toward the compartment (see at least HAYMAN, ¶ 0065, “In some example, the user device 830 may include a smartphone or similar mobile user device equipped with appropriate sensors, such as depth sensor, laser scanning sensors, LIDAR sensors, and the like. Accordingly, depth data, laser scan data, LIDAR sensor data, and the like may be collected from the sensors 837 for creating a three-dimensional surface model, having an artificial intelligence engine examine the data, presenting the three-dimensional surface model and related assessment to an insurer computing device, generating damage recommendations, and the like, and described in further detail below. Such assessments and recommendations may provide more accurate damage assessments, including better distinguishing new damage from existing damage. In some examples, the user device 830 may collect sensor data from sensor 837 and subsequently provide the data to a cloud computing network, e.g., for subsequent use by a damage assessment computing platform 710. In some examples, the damage assessment computing platform 710 may use an artificial intelligence engine to analyze the three-dimensional surface model and image/video package of the premises to calculate recommended claim estimates. The damage assessment computing platform 710 may receive measurements of the premises and/or additional property information to determine more accurate damage claim estimates.”; ¶ 0079, “At step 925, the damage assessment computing platform 710 may assess damage to the premises based on the map generated by the sensor data. In some examples, the damage assessment computing platform 710 may analyze the sensor data and/or other premises data in order to assess a scope of damage to the premises. Assessing the damage at step 925 may include identifying one or more steps to mitigate and/or repair damage to designated areas of the premises. Assessing the damage at step 925 may include transmitting a damage report to a display interface that includes an assessed damage condition, as will be described in greater detail below. As part of assessing the damage to the premises at step 925, the damage assessment computing platform 710 may receive and analyze various data components received from the user device in addition to the three-dimensional map, such as point cloud data, historical weather data, data related to previous claims, and the like.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the depth sensor system with multiple LiDAR’s to render a point cloud of a car's interior within GLAZMAN in view of DIMITROV and BOTONJIC to include the point cloud originating from a LiDAR integrated into a mobile device as within HAYMAN to yield a more effective lidar scanning system wherein another viewing angle is introduced. EXAMINERS NOTE: Although HAYMAN does not disclose the phone looking into the vehicle, a person of ordinary skill in the art would recognize that a mobile device can be brought into the vehicle for scanning. Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over GLAZMAN (US20210179117A1) in view of DIMITROV (US 20230071443 A1) in further view of BOTONJIC (US 20200294266 A1) in further view of TORABI (US 20210402942 A1) in further view of HONG (US 20190123508 A1). 
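As a purely illustrative aside on the claim 21 rationale (an additional point cloud captured from a different viewing angle, e.g., by a mobile device), the sketch below folds an externally captured cloud into the cabin frame before merging. The rigid transform aligning the two frames is assumed to be known from a prior calibration; nothing here is taken from the application or the cited references.

import numpy as np

def add_external_view(cabin_cloud: np.ndarray,
                      device_cloud: np.ndarray,
                      rotation: np.ndarray,
                      translation: np.ndarray) -> np.ndarray:
    # Rigid-body transform of an (N, 3) device-frame cloud into the cabin frame,
    # then concatenation with the in-cabin LiDAR cloud to add the extra viewpoint.
    device_in_cabin = device_cloud @ rotation.T + translation
    return np.vstack([cabin_cloud, device_in_cabin])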
Regarding claim 22: GLAZMAN in view of DIMITROV in further view of BOTONJIC discloses the method of claim 16, and GLAZMAN does not disclose, but DIMITROV teaches: each LIDAR module comprises (see at least DIMITROV, ¶ 0010, “To overcome the problem introduced by the grille pattern and to improve the operation of the sensor system, two or more LiDAR sensors may be optimally placed at a certain distance apart such that when the point cloud data for each sensor are merged together, the interference pattern of the grille is partially or totally removed. In other words, one LiDAR sensor may “see” regions of the scene that are blocked on the other sensor and vice versa. The perception processing software may then combine data from the multiple point clouds to achieve a relatively unobstructed data set.”; ¶ 0011, “A method for imaging a scene in front of a vehicle may include: receiving, via a processing device and a memory, a first point cloud from a first LiDAR sensor mounted at a first location behind a vehicle grille, the first point cloud representing the scene in front of the vehicle, wherein as a result of the grille the scene represented by the first point cloud data may be partially occluded with a first pattern of occlusion receiving, via the processing device and the memory, a second point cloud from a second LiDAR sensor mounted at a second location behind the vehicle grille combining the first and second point clouds to generate a composite point cloud data set, wherein the first location of the first LiDAR sensor may be located relative to the second location of the second LiDAR sensor such that when a point cloud data for the first optical sensor and the second optical sensor are combined, the first pattern of occlusion may be at least partially compensated; and processing the combined point cloud data set.”; ¶ 0020, “In some embodiments, the first pattern of occlusion may be at least partially compensated when occlusions of the first pattern of occlusion are reduced in the combined point cloud data sets. In various embodiments, the first pattern of occlusion may be at least partially compensated when occlusions of the first pattern of occlusion are eliminated in the combined point cloud data sets.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the depth sensor system for monitoring car cabin within GLAZMAN to use multiple time-of-flight sensors such as LIDAR for data collection to then be combined together for eliminating occlusions within DIMITROV to yield an effective car interior monitoring system that can robustly visualize around obstructions within the vehicle.
GLAZMAN in view of DIMITROV does not disclose, but HONG teaches: a light source, (see at least HONG, ¶ 0033, “In accordance with various embodiments, a technical solution can be provided for performing optical detection and ranging. A sensor system can comprise a light source generating a light pulse that is collimated, and a plurality of optical elements. Each of the plurality of optical elements is configured to rotate independently about an axis that is substantially common, and the plurality of optical elements operate to collectively direct the light pulse to one or more objects in an angle of view of the sensor system.
Furthermore, the sensor system can comprise a detector configured to receive, via the plurality of optical elements, at least a portion of photon energy of the light pulse that is reflected back from the one or more objects in the angle of view of the sensor system, and convert the received photon energy into at least one electrical signal.”) a sensor configured to detect reflection of light emitted by the light source off of a target surface, (see at least HONG, ¶ 0033) a controller configured to monitor time-of- flight of light pulses emitted by the light source and returned to the sensor, (see at least HONG, ¶ 0044, “In accordance with various embodiments of the present invention, a measuring circuitry, such as a time-of-flight (TOF) unit 107, can be used for measuring the TOF in order to detect the distance to the object 104. For example, the TOF unit 107 can compute the distance from TOF based on the formula t=2D/c, where D is the distance between the sensor system and the object, c is the speed of light, and t is the time that takes for light to take the round trip from the sensor system to the object and back to the sensor system. Thus, the sensor system 110 can measure the distance to the object 104 based on the time difference between the generating of the light pulse 111 by the light source 101 and the receiving of the return beam 112 by the detector 105.”) optics, and (see at least HONG, ¶ 0033) one or more motors that control movement of the optics; (see at least HONG, ¶ 0039, “As shown in FIG. 1, the collimated light can be directed toward a beam steering/scanning device 103, which can induce deviation of the incident light. In accordance with various embodiments, the beam steering/scanning device 103 can steer the laser light to scan the environment surrounding the sensor system 110. For example, the beam steering device 103 can comprises various optical elements such as prisms, mirrors, gratings, optical phased array (e.g. liquid crystal controlled grating), or any combination thereof. Also, each of these different optical elements can rotate about an axis 109 that is substantially common (hereafter referred as a common axis without undue limitation), in order to steer the light toward different directions. I.e., the angle between rotation axes for different optical elements can be the same or slightly different. For example, the angle between rotation axes for different optical elements can be within a range of 0.01 degree, 0.1 degree, 1 degree, 2 degree, 5 degree or more.”; ¶ 0046, “FIG. 2 shows a schematic diagram of an exemplary LIDAR sensor system using a Risley prism pair, in accordance with various embodiments of the present invention. As shown in FIG. 2, the LIDAR sensor system 200 can use a Risley prism pair, which may comprise two prisms 211-212, for light steering/scanning (i.e. functioning as the beam scanning/steering device 103 in the scheme as shown in FIG. 1). For example, the two prisms 211-212 may be placed next to each other in a parallel fashion. In various embodiments, the prisms 211-212 may have a round cross section and the central axes for the prisms 211-212 may coincide with each other or with small angle. In various embodiments, the motor (and/or other power/control units) can cause the prisms 211-212 to rotate about the common axis 209 (e.g. the central axis). I.e., the angle between rotation axes for different optical elements can be the same or slightly different. 
For example, the angle between rotation axes for different optical elements can be within a range of 0.01 degree, 0.1 degree, 1 degree, 2 degree, 5 degree or more.”) the optics comprises lenses or mirrors that are configured to change an angle of emission for the light pulses and/or return the light pulses to the sensor; and (see at least HONG, ¶ 0033; ¶ 0039) the optics includes (see at least HONG, ¶ 0033) a first lens or mirror associated with the light source and (see at least HONG, ¶ 0033) a second lens or mirror associated with the sensor (see at least HONG, ¶ 0034, “In accordance with various embodiments, a technical solution can be provided for performing optical detection and ranging. A sensor system can comprise a light source that operates to generate a series of light pulses at different time points, and a plurality of optical elements, wherein each of the plurality of optical elements is configured to rotate independently about an axis that is substantially common. Furthermore, the sensor system can comprise a controller that operates to control respective rotation of each of the plurality of optical elements, in order to collectively direct the series of light pulses to different directions in an angle of view of the sensor system, and a detector configured to detect a plurality of target points in the angle of view, wherein each target point is detected based on receiving at least a portion of photon energy of a light pulse in the series of light pulses that is reflected back from one or more objects in the angle of view.”) such that the first lens or mirror is moved by the one or more motors to guide the light emitted by the light source and the second lens or mirror is driven by the one or more motors to guide the light reflected off the target surface and returned to the sensor. (see at least HONG, ¶ 0039; ¶ 0046) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify, with a reasonable expectation of success, the LiDARs used to render a point cloud of a car's interior within GLAZMAN in view of DIMITROV to use a LIDAR with a motorized set of prisms/lenses to steer the transmitted and received beam within HONG to effectively yield a car interior monitoring system that can robustly visualize around obstructions within the vehicle.
Claims 23 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over GLAZMAN (US20210179117A1) in view of DIMITROV (US 20230071443 A1) in further view of BOTONJIC (US 20200294266 A1) in further view of HAYMAN (US 20220172475 A1).
Regarding claim 23: With regards to claim 23, this claim is substantially similar to claim 21 and is therefore rejected using the same references and rationale.
Regarding claim 25: With regards to claim 25, this claim is substantially similar to claim 21 and is therefore rejected using the same references and rationale.
Claims 24 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over GLAZMAN (US20210179117A1) in view of DIMITROV (US 20230071443 A1) in further view of BOTONJIC (US 20200294266 A1) in further view of HONG (US 20190123508 A1).
Regarding claim 24: With regards to claim 24, this claim is substantially similar to claim 22 and is therefore rejected using the same references and rationale.
Regarding claim 26: With regards to claim 26, this claim is substantially similar to claim 22 and is therefore rejected using the same references and rationale.
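For reference on the HONG passages cited for claim 22, the minimal worked sketch below applies the quoted time-of-flight relation (t = 2D/c, so D = c*t/2) and adds a conventional direction-to-XYZ conversion for the steered beam. The azimuth/elevation inputs are an assumption for illustration; in HONG the beam direction follows from the rotating prism pair. Nothing here is taken from the application or the cited references.

import math

SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_from_tof(round_trip_time_s: float) -> float:
    # D = c * t / 2: half the measured round trip at the speed of light.
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

def point_from_return(round_trip_time_s: float,
                      azimuth_rad: float,
                      elevation_rad: float) -> tuple:
    # Place one cloud point at the measured range along the steered beam direction.
    r = range_from_tof(round_trip_time_s)
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# Example: a 10 ns round trip corresponds to a target roughly 1.5 m away.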
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: WEYERS (US 20210081689 A1), ¶ 0073, “FIG. 8 shows an illustration 800 of a classification system using the hand crops 304, 306 and the 3D skeleton data 206 according to various embodiments. The hand crops 304, 306 (in other words: sub images corresponding to the hand regions) may be fed into one or multiple convolutional neural networks (CNN) (for example, the hand crop 304 of the left hand may be fed into a first neural network 802, and the hand crop 306 of the right hand may be fed into a second neural network 804) to encode the image content to some embeddedings (feature vectors 806, 808, which may be learnt by the network, and which may have or may not have a meaning that can be interpreted by a human operator). The first neural network 802 and the second neural network 804 may be identical, for example trained identically, or different, for example networks with different structure, or networks with identical structure but trained differently, for example with different training data, or partially different (with some parts identical, and some parts different, for example with partially different structure, or with parts which are trained differently). The feature vectors 806, 808 from both the left hand and the right hand may be concatenated (or appended). Furthermore, based on the 3D body keypoints of the 3D skeleton data 206, 3D coordinates (and optionally the respective uncertainties) may be included in a further feature vector 810. The feature vectors 806, 808 and the further feature vector 810 may be appended to obtain a joined feature vector 812.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAFAEL VELASQUEZ VANEGAS whose telephone number is (571)272-6999. The examiner can normally be reached M-F 8 - 4. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, VIVEK KOPPIKAR, can be reached at (571) 272-5109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RAFAEL VELASQUEZ VANEGAS/
Patent Examiner, Art Unit 3667
/JOAN T GOODBODY/
Examiner, Art Unit 3667

Prosecution Timeline

Mar 16, 2023
Application Filed
Apr 16, 2025
Non-Final Rejection — §103
Jul 31, 2025
Response Filed
Sep 18, 2025
Final Rejection — §103
Nov 19, 2025
Response after Non-Final Action
Dec 09, 2025
Request for Continued Examination
Dec 17, 2025
Response after Non-Final Action
Jan 20, 2026
Non-Final Rejection — §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+100.0%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
