Prosecution Insights
Last updated: April 19, 2026
Application No. 18/711,655

METHOD AND APPARATUS FOR CONTROLLING VEHICLE-RIDING SAFETY, ELECTRONIC DEVICE AND PRODUCT

Final Rejection (§103, §112)

Filed: May 20, 2024
Examiner: TRIEU, VAN THANH
Art Unit: 2685
Tech Center: 2600 (Communications)
Assignee: Great Wall Motor Company Limited
OA Round: 2 (Final)

Grant Probability: 84% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 2m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allowance Rate: 84% (909 granted / 1076 resolved; +22.5% vs Tech Center average, above average)
Interview Lift: +13.0% (moderate; allowance rate among resolved cases with vs. without an interview)
Average Prosecution Time: 2y 2m
Currently Pending: 33
Total Applications: 1109 (across all art units)

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§102: 36.7% (-3.3% vs TC avg)
§103: 44.6% (+4.6% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)

Tech Center averages are estimates; based on career data from 1076 resolved cases.
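The headline figures above can be reproduced from the raw counts. The sketch below is illustrative only: the 62% Tech Center baseline is an assumed value chosen to show how a "+22.5% vs TC avg" delta would be derived, and the dashboard displays the 84.5% ratio rounded to 84%.

```python
# Recompute the headline examiner statistics from the raw counts shown
# above (909 allowances out of 1076 resolved cases). The TC-average
# baseline is a hypothetical stand-in, not an official dataset value.

granted, resolved = 909, 1076
career_allow_rate = granted / resolved           # 0.8448 -> shown as 84%

tc_avg_allow_rate = 0.62                          # assumed TC 2600 average
lift_vs_tc = career_allow_rate - tc_avg_allow_rate

print(f"Career allowance rate: {career_allow_rate:.1%}")
print(f"Delta vs TC average:   {lift_vs_tc:+.1%}")
```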

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-7, 9, 11-19 and 23-25 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

In claim 1, the limitation "based on the human-face features of the optimum human-face image, determining the ages of the objects" is not clearly described in the Specification filed on 05/20/2024.
The specification only discloses:

Paragraph [0071], Step A24: tracking and locating the human face of the object in the plurality of frame images, extracting the human-face features of the object and placing into the plane coordinate system, to obtain an optimum human-face image from the plurality of frame images.

Paragraph [0072]: As an example, the optimum human-face image is a direct image of the face in which the two eyes completely fall within the plane coordinate system and the pixels are clear.

Paragraph [0073], Step A25: extracting the human-face features of the optimum human-face image.

Furthermore, Figure 2 does not provide a step of determining the ages of the objects.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 9, 11-16, 17-19 and 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Katz [US 2023/0347903] in view of Faith et al. [US 2017/0308909].

Consider claim 1.
(Currently Amended) A method for controlling vehicle-riding safety (the DMS and OMS for monitoring and tracking in-vehicle dynamic driver, occupant or other person gaze tracking, see abstract, para [0042-0047]), comprising:

determining a target object from objects that ride in a vehicle, wherein the target object comprises at least one of a child with a preset age interval from 0 year to 12 years, a person having mental retardation and a disabled person having difficulty in moving (reads upon the occupant monitoring system "OMS", which monitors and determines one or more occupants of a vehicle to be monitored and tracked, such as a particular child/baby being seated on a person's lap, 4 children in a rear seat or the like, including a passenger/person with mental distress, a physical condition or sickness, see Figs. 1-5, para [0042, 0045, 0046, 0050, 0093]);

obtaining an action trajectory of the target object (the image sensors 6, such as CCD or CMOS, camera and/or video, obtain images or videos that indicate the action, motion, movement, posture and/or orientation of one or more body parts of a driver, object or passenger, and the location of users or users' body parts, see Figs. 1-5, para [0071, 0075, 0093, 0100, 0137, 0138]);

based on the action trajectory, predicting a target action of the target object at a future moment (the processor 12 predicts gestures, motion, body posture and features associated with object 24, an occupant, a child or another person attempting to open the door/window or to reach out of the door/window, or an item or object in the back seat, see Fig. 1, para [0072, 0089-0094, 0135, 0138-0141]);

based on the target action and a vehicle state of the vehicle, determining a dangerous situation of the target object at the future moment; and

executing a control strategy corresponding to the dangerous situation, wherein the control strategy comprises at least a prompting message, and the prompting message is for prompting other objects than the target object that ride in the vehicle that the target object is about to be in danger (the processor 12 provides at least one of a message, command, or alert that may be associated with at least one of: a first indication of a level of danger of picking up or interacting with the mobile device; or a second indication that the driver can safely interact with the mobile device, wherein the at least one processor 12 is further configured to determine the first indication or the second indication using information associated with at least one of: a road condition, a driver condition, a level of driver attentiveness to the road, a level of driver alertness, one or more vehicles in a vicinity of the driver's vehicle, a behavior of the driver, a behavior of other occupants or passengers or another individual's action, an interaction of the driver with other passengers, the driver's actions prior to interacting with the mobile device, one or more applications running on a device in the vehicle, a physical state of the driver, or a psychological state of the driver. In some embodiments, an indication of levels of danger, as well as what is classified by the system to be "dangerous" or "safe," may be preprogrammed in one or more rule sets stored in memory or accessed by the at least one processor, or may be determined by a machine learning algorithm trained using data sets indicative of various types of behaviors and driving events, and outcomes indicative of actual or potential harm to persons or property. The at least one generated message, command, or alert causes an output device to communicate to the individual a warning associated with a level of danger of the interaction or the attempted operation, see Figs. 5, 8, para [0072, 0089, 0090, 0320, 0322, 0341, 0564, 0565]); and

wherein when the target object is the child, determining the target object from the objects that ride in the vehicle comprises (the child/baby presence in the vehicle, see Figs. 1-5, para [0045, 0046, 0093]):

collecting in real time a video containing the objects that ride in the vehicle (the processor 12 receives information data from the child/baby, driver, passengers and/or object in a real-time execution period, see Figs. 1-5, para [0045, 0086, 0087]);

obtaining a plurality of frame images contained in the video (the processor 12 receives captured images and the frame rate from the video, see Figs. 3, 12, para [0106]);

establishing a plane coordinate system in each of the plurality of frame images (reads upon the processor 12, which may detect and analyze visual cues in accordance with image processing techniques disclosed herein, such as contour recognition, feature recognition and tracking, pattern matching, and machine learning algorithms trained using image information of the vehicle interior and/or image information of other vehicle interiors. In some embodiments, one or more of the cues may be associated with the driver. In such embodiments, the cues may relate to one or more visual facial features of the driver, such as a location and orientation of the driver's eyes, nose, ears, hair, chin, jaw line or other contour, or any other visual features that can be used to observe a change in the image sensor calibration. In some embodiments, cues associated with the driver may include a contour of the driver's body or body part, or features of one or more body parts of the driver. In some embodiments, the processor may detect an orientation of the driver, such as an orientation of the driver relative to the image sensor, and at least one of the cues may be associated with a detected orientation of the driver, see Figs. 12, 13, para [0267, 0268]);

tracking and locating human faces of the objects in the plurality of frame images, extracting human-face features of objects and placing the human-face features of objects into the plane coordinate system (the processor 12 tracks and extracts the referenced gaze of a driver or user, see Figs. 5, 7E, para [0096, 0111, 0172, 0206, 0220]), to obtain an optimum human-face image from the plurality of frame images (reads upon the one or more processors, which may predict the amount of time based on at least one of the current driver gaze direction, such as gaze direction 500 shown in FIG. 8, or the actual "optimum" gaze direction 1300 shown in FIG. 11, para [0255]);

extracting human-face features of the optimum human-face image (the processor 12 may extract the face gazing of a driver/user, see Figs. 5, 8, 7E, 11, para [0096, 0111, 0172, 0206, 0220, 0255]); and

based on the ages of the objects that ride in the vehicle, determining the target object (reads upon the particular or target person, such as a child/baby, a driver, an object or a person, being determined by the DMS and/or OMS, which tracks the driver and reports the driver's identity, demographics (gender and age), state, health, physical condition, emotional condition, cognitive load, actions, behaviors, driving performance, distraction and drowsiness. The DMS may include modules that detect or predict gestures, motion and body posture of a child or person present in the car, see para [0042, 0045, 0046]).

But Katz fails to disclose: based on the human-face features of the optimum human-face image, determining the ages of the objects.

However, Katz teaches that the DMS and/or OMS may comprise a system that tracks the driver and reports the driver's identity, demographics (gender and age), state, health, physical condition, emotional condition, cognitive load, actions, behaviors, driving performance, distraction and drowsiness. The DMS may include modules that detect or predict gestures, motion, body posture, features associated with user alertness, driver alertness, fatigue, attentiveness to the road or distraction, features associated with expressions or emotions of a user, features associated with the gaze direction of a user, driver or passenger, or a showing of signs of sudden sickness, or the like (see para [0042, 0045]).

Faith et al. suggests that the video analyzer 216 may analyze the video feed data to detect the person's face using facial detection techniques. Video analyzer 216 may also determine the age, race, and dressing style of the target person recognized in the video feed data. As discussed above, a detected face may be extracted and representative facial data may be determined (see Fig. 5, para [0144]).

Therefore, it would have been obvious to one skilled in the art before the effective filing date of the invention to use or implement the analyzer of Faith et al., which determines an age of a person based on the video feed data of a target person's face, with the DMS or OMS of Katz, for providing an accurate age of a child/baby, driver or passenger being captured, monitored and tracked by the video system.
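For context, the pipeline the examiner finds only partly supported (select an "optimum" face image whose two eyes fall within the plane coordinate system, per spec paras [0071]-[0072], then derive age from its features) can be sketched as follows. This is an illustrative reconstruction only: the sharpness score, feature vector, and linear age model are hypothetical stand-ins, not the applicant's disclosed method or the Katz/Faith algorithms.

```python
# Hypothetical sketch of the disputed claim-1 step: pick an optimum face
# frame (both eyes inside the image bounds, sharpest pixels), then map
# its features to an age. All numbers below are illustrative.

def optimum_face_image(frames, width, height):
    """Return the sharpest frame whose two eyes fall inside the frame."""
    def eyes_in_bounds(f):
        return all(0 <= x < width and 0 <= y < height for x, y in f["eyes"])
    candidates = [f for f in frames if eyes_in_bounds(f)]
    return max(candidates, key=lambda f: f["sharpness"])

def estimate_age(features):
    """Hypothetical linear regressor mapping face features to years."""
    weights = [4.0, 2.0, 1.0]                 # assumed, for illustration
    return sum(w * x for w, x in zip(weights, features))

frames = [
    {"eyes": [(10, 12), (30, 12)], "sharpness": 0.90, "features": [1.0, 1.5, 2.0]},
    {"eyes": [(-5, 12), (30, 12)], "sharpness": 0.99, "features": [2.0, 2.0, 2.0]},
]
best = optimum_face_image(frames, width=64, height=48)  # 2nd frame excluded: eye off-image
age = estimate_age(best["features"])
print(age)  # 9.0 -> falls inside the claimed 0-12 child interval
```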
Consider claim 2.

(Original) The method for controlling the vehicle-riding safety according to claim 1, wherein the step of, based on the target action and the vehicle state of the vehicle, determining the dangerous situation of the target object at the future moment (as cited in respect to claim 1 above) comprises: obtaining a target region where the target object is located when the target object executes the target action (the processor is programmed to store sensed data information and executable instructions relating to the activities of a driver, child or passenger seated inside the vehicle, see para [0034, 0040]); and when at least local region of the target region is not located within a safe region that is predetermined, based on the vehicle state of the vehicle and a dangerous region where the at least local region is belonged, determining the dangerous situation of the target object at the future moment (reads upon the vehicle driving behaviors, which provide one or more event notifications when the driver or passenger unbuckles a seatbelt, interacts with a device, texts on the mobile device or watches a game on the mobile device (which is not located within a safe area/region) during or while the vehicle is driving; the processor then executes and generates an alert or notification to indicate a dangerous situation in the future, see para [0066, 0090]).

Consider claim 3.

(Original) The method for controlling the vehicle-riding safety according to claim 1, wherein the step of obtaining the action trajectory of the target object comprises: obtaining a video containing the target object (as cited in respect to claim 1 above); obtaining human-body gestures of the target object in each frame image contained in the video, wherein each of the human-body gestures is formed by a plurality of articulation points (the body parts of a driver, occupant, passenger, person or user, see para [0045, 0063, 0069, 0070, 0074, 0075]); and based on a variation trend of position information of same articulation points in the human-body gestures corresponding to each frame image, determining the action trajectory of the target object (the user body gestures, see para [0032, 0045, 0063, 0093]).

Consider claim 4.

(Currently Amended) The method for controlling the vehicle-riding safety according to claim 2, wherein the step of obtaining the target region where the target object is located when the target object executes the target action comprises: obtaining an age of the target object (see para [0042, 0045]); from a predetermined corresponding relation between ages and body-height thresholds, looking up a target body-height threshold of the target object corresponding to an age; based on the target body-height threshold and the human-body gestures of the target object, obtaining contours of body parts of the target object (the age and posture of a user differ between a child, an adult or gender, and a dynamic threshold or scale is predetermined for the individual user, see para [0050, 0093, 0094]); determining a target body part executing the target action of the target object; and based on position information of a contour of the target body part at a current time, determining the target region where the contour of the target body part is located when the target body part is executing the target action (as cited in respect to claims 1 and 3 above, and including the contour of a portion of the user's body, see para [0033, 0153, 0181]).

Consider claim 5.

(Currently Amended) The method for controlling the vehicle-riding safety according to claim 1, wherein the step of, based on the action trajectory, predicting the target action of the target object at the future moment comprises: obtaining text information of the target object corresponding to voice information; and based on the text information and the action trajectory, predicting the target action of the target object at the future moment (as cited in respect to claim 1 above, and wherein the predicted actions include talking, texting a message, speech and/or voice, see para [0043, 0047, 0086, 0090]).

Consider claim 6.

(Currently Amended) The method for controlling the vehicle-riding safety according to claim 2, wherein the safe region comprises a first safe region (the driver seat region) and/or a second safe region (the front seat region or the rear seat region for passengers seated inside the vehicle), and the safe region is determined by: from a predetermined corresponding relation between ages and body-height thresholds, looking up a target body-height threshold of the target object corresponding to an age; and when the target object is sitting in a safety seat, determining, when an object having the target body-height threshold is correctly sitting in the safety seat, a region where the object is located to be the first safe region (as cited in respect to claim 4 above, wherein the seat validity or seatbelt is for a child, an adult, a pregnant person or gender to sit in the seat, see para [0093]); and/or when the target object is not sitting in the safety seat, determining a region other than a predetermined dangerous region to be the second safe region, wherein the dangerous region comprises at least one of a region where a door handle is located, a region where a car-window opening press key is located and a region where a car-door gap is located (as cited above, and including the danger of opening a window, getting in or out of the vehicle, or a driver attempting to close/open a door or window, see para [0043, 0048, 0090]).

8. (Cancelled)

Consider claim 9.

(Currently Amended) An electronic device, comprising: a processor (see para [0034-0036]); and a memory configured to store an instruction executable by the processor (see para [0034, 0040]); wherein the processor is configured to execute the instruction to implement the method for controlling the vehicle-riding safety according to claim 1 (as cited in respect to claim 1 above, and including the processor programmed to execute the instructions, see para [0073, 0079]).

10. (Cancelled)

Consider claim 11.

(Currently Amended) A non-transitory computer-readable storage medium, wherein when an instruction in the non-transitory computer-readable storage medium is executed by a processor of an electronic device, the electronic device is enabled to implement the method for controlling the vehicle-riding safety according to claim 1 (as cited in respect to claims 1 and 9 above, and including the non-transitory computer, see para [0590, 0591, 0613]).

Consider claim 12.

The electronic device according to claim 9, wherein the operation of, based on the target action and the vehicle state of the vehicle, determining the dangerous situation of the target object at the future moment comprises: obtaining a target region where the target object is located when the target object executes the target action; and when at least local region of the target region is not located within a safe region that is predetermined, based on the vehicle state of the vehicle and a dangerous region where the at least local region is belonged, determining the dangerous situation of the target object at the future moment (as cited in respect to claim 2 above).

Consider claim 13.
The electronic device according to claim 9, wherein the operation of obtaining the action trajectory of the target object comprises: obtaining a video containing the target object (the image sensors 6, such as CCD or CMOS, camera and/or video, obtain images or videos that indicate the action, motion, movement, posture and/or orientation of one or more body parts of a driver, object or passenger, and the location of users or users' body parts, see Figs. 1-5, para [0071, 0075, 0093, 0100, 0137, 0138]); obtaining human-body gestures of the target object in each frame image contained in the video, wherein each of the human-body gestures is formed by a plurality of articulation points (as cited in respect to claim 3 above, and including capturing or tagging frames or images from the video, see para [0038, 0039, 0106]); and based on a variation trend of position information of same articulation points in the human-body gestures corresponding to each frame image, determining the action trajectory of the target object (as cited in respect to claims 1, 3-5 above).

Consider claim 14.

The electronic device according to claim 12, wherein the operation of obtaining the target region where the target object is located when the target object executes the target action comprises: obtaining an age of the target object; from a predetermined corresponding relation between ages and body-height thresholds, looking up a target body-height threshold of the target object corresponding to an age; based on the target body-height threshold and the human-body gestures of the target object, obtaining contours of body parts of the target object (as cited in respect to claim 4 above); determining a target body part executing the target action of the target object; and based on position information of a contour of the target body part at a current time, determining the target region where the contour of the target body part is located when the target body part is executing the target action (the captured contour images and activities of an individual, driver or passenger during the current driving session in real time, see para [0033, 0051, 0060, 0063, 0070, 0094, 0255, 0256, 0268]).

Consider claim 15.

The electronic device according to claim 9, wherein the operation of, based on the action trajectory, predicting the target action of the target object at the future moment comprises: obtaining text information of the target object corresponding to voice information; and based on the text information and the action trajectory, predicting the target action of the target object at the future moment (as cited in respect to claim 5 above).

Consider claim 16.

The electronic device according to claim 12, wherein the safe region comprises a first safe region and/or a second safe region, and the safe region is determined by: from a predetermined corresponding relation between ages and body-height thresholds, looking up a target body-height threshold of the target object corresponding to an age; and when the target object is sitting in a safety seat, determining, when an object having the target body-height threshold is correctly sitting in the safety seat, a region where the object is located to be the first safe region; and/or when the target object is not sitting in the safety seat, determining a region other than a predetermined dangerous region to be the second safe region, wherein the dangerous region comprises at least one of a region where a door handle is located, a region where a car-window opening press key is located and a region where a car-door gap is located (as cited in respect to claim 6 above).

Consider claim 18.

The non-transitory computer-readable storage medium according to claim 11, wherein the operation of, based on the target action and the vehicle state of the vehicle, determining the dangerous situation of the target object at the future moment comprises: obtaining a target region where the target object is located when the target object executes the target action; and when at least local region of the target region is not located within a safe region that is predetermined, based on the vehicle state of the vehicle and a dangerous region where the at least local region is belonged, determining the dangerous situation of the target object at the future moment (as cited in respect to claims 1 and 9 above).

Consider claim 19.
The non-transitory computer-readable storage medium according to claim 11, wherein the operation of obtaining the action trajectory of the target object comprises: obtaining a video containing the target object; obtaining human-body gestures of the target object in each frame image contained in the video, wherein each of the human-body gestures is formed by a plurality of articulation points; and based on a variation trend of position information of same articulation points in the human-body gestures corresponding to each frame image, determining the action trajectory of the target object (as cited in respect to claims 1 and 13 above).

Consider claim 23.

(New) The method for controlling the vehicle-riding safety according to claim 1, wherein the step of obtaining the action trajectory of the target object comprises: obtaining videos containing the target object that are collected by cameras in different directions (the direction of cameras 1110 may face either further toward the driver, further away from the driver, or further upward, or the image sensor orientation may have changed, see Fig. 11, para [0263, 0266]; the vehicle includes image sensors 6, 1701 and 1702, an IR camera and videos, see Figs. 12, 17, para [0068, 0071, 0074, 0096, 0137]); obtaining a plurality of frame images contained in the videos (as cited in respect to claim 1 above, such as the processor 12 receiving images from at least one camera, videos and an IR camera, para [0103, 0212]); and regarding each of the plurality of frame images, obtaining a two-dimensional coordinate (u,v) in a two-dimensional coordinate system of the target object in the image (the 2-D image sensors 6, see Figs. 1, 2, para [0100, 0137, 0143]), and converting the two-dimensional coordinate into a three-dimensional coordinate (X,Y,Z) (the control boundary may be representative of an orthogonal projection of the physical edges of a device into 3D space; head pose, gaze, face and facial attributes in 3D; virtual 3D; and 3D reconstruction of the environment around the vehicle, see Fig. 1, para [0032, 0044, 0091, 0092, 0154, 0167]).

Consider claim 24.

(New) The method for controlling the vehicle-riding safety according to claim 23, wherein after the step of, regarding the each of the plurality of frame images, obtaining the two-dimensional coordinate (u,v) in the two-dimensional coordinate system of the target object in the image, and converting the two-dimensional coordinate into the three-dimensional coordinate (X,Y,Z) (as cited in respect to claim 23 above), the method further comprises: adding the three-dimensional coordinates of the target object in the images contained in the videos into the three-dimensional coordinate system (reads upon the 3D image of a child, driver, object or passenger being presented or added on a 3D display 4, a 3D map, the physical edges of the device or some other physical dimension of the display for a user to view, see Fig. 3, para [0032, 0174-0178]); obtaining a plurality of three-dimensional-position sets, wherein each of the three-dimensional-position sets includes the three-dimensional coordinates corresponding to a same collection time (as above, and the processor 12 may be configured to perform different actions based on the number of times a control boundary is crossed or the length of the path of the gesture relative to the physical dimensions of the user's body. For example, an action may be caused by the processor based on the number of times that each edge or corner of the control boundary is crossed by a path of a gesture. In some embodiments, a dimension of time may be associated with the 3D mapping, see Figs. 3, 10, para [0197, 0235]); inputting each of the three-dimensional-position sets into a gesture identifying model that is pre-constructed, to obtain human-body gestures corresponding to the three-dimensional coordinates (the gesture location, as used herein, may refer to one or a plurality of locations associated with a gesture. For example, a gesture location may be a location of an object or gesture in the image information as captured by the image sensor, a location of an object or gesture in the image information in relation to one or more control boundaries, a location of an object or gesture in the 3D space in front of the user, a location of an object or gesture in relation to a device or physical dimension of a device, or a location of an object or gesture in relation to the user's body or a part of the user's body, such as the user's head. For example, a "gesture location" may include a set of locations comprising one or more of a starting location of a gesture, intermediate locations of a gesture, an ending location of a gesture and a type of gesture, see Figs. 3, 5A-5L, 6, para [0154, 0166, 0167]); and regarding any two three-dimensional-position sets whose time is consecutive, connecting corresponding articulation points in the human-body gestures corresponding to the two three-dimensional-position sets, to obtain the action trajectory in the three-dimensional coordinate system (as above, and FIGS. 5A-5L illustrate graphical representations of example motion paths that may be associated with touch-free gesture systems and methods consistent with the disclosed embodiments. Each differing combination of motion path and gesture may result in a differing action, see Fig. 6, para [0012, 0110, 0166, 0187-0197]).

Consider claim 25.
(New) The method for controlling the vehicle-riding safety according to claim 1, wherein after determining the target object from objects that ride in the vehicle, likes of the target object is analyzed based on an artificial intelligence (AI) algorithm, to analyze out an article that the target object is interested in (read upon the machine learning system may be implemented in various ways including linear and logistic regression, linear discriminant analysis, support vector machines (SVM), decision trees, random forests, ferns, Bayesian networks, boosting, genetic algorithms, simulated annealing, convolutional neural networks (CNN) or AI, (see para [0073, 0079, 0080, 0082, 0267]). Claims 7, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Katz [US 2023/0347903] and Faith et al [US 2017/0308909] and further in view of Jeong [US 2018/0179790] Consider claim 7. (Original) The method for controlling the vehicle-riding safety according to claim 6, wherein the step of, based on the vehicle state of the vehicle and the dangerous region where the at least local region is belonged, determining the dangerous situation of the target object at the future moment (as cited in respect to claim 1 above), comprises: when the vehicle state is a travelling state, the target object is sitting in the safety seat and the dangerous region is a region other than the first safe region, determining that the dangerous situation is that the target object disengages from the safety seat (the dangerous includes a passenger wearing seatbelt incorrectly and/or unbuckling a seatbelt, see para [0043, 0057, 0066, 0090]); and when the vehicle state is that a vehicle speed is greater than a preset value and the dangerous region is a region where the car-window opening press key is located, determining that the dangerous situation is that the target object is about to open a car window (which reads upon the vehicle speed acceleration/deceleration, suddenly stop being considered as a 
dangerous condition of driving behaviors, or a response to an emergency event, and the driver/passenger activities include opening a door/window or reaching through the door or window while the vehicle is in a dangerous situation, see para [0043, 0048, 0090, 0115]). But Katz fails to disclose when the vehicle state is the travelling state, a child safety lock is not turned on and the dangerous region is a region where the door handle is located, determining that the dangerous situation is that the child safety lock is not turned on. However, Katz teaches that the detection system may comprise one or more components embedded in the vehicle or be part of the mobile device, such as the processor, camera, or microphone of the mobile device. In other embodiments, the mobile device could be another device or system in the car, such as the entertainment system, HVAC controls, or other vehicle systems that the driver should not be interacting with while driving. In yet another embodiment, the detection system may be a part of the vehicle, such as the hand brake, buttons, knobs, or door locks of the vehicle (see para [0217]). Machine learning components can be used to detect one or more persons, a person's age or gender, a person's ethnicity, a person's height, a person's weight, a pregnancy state, a posture, an abnormal seating position, seat validity (availability of a seatbelt), a posture of the person, seat belt fitting and tightness, an object, presence of an animal in the vehicle, presence and identification of one or more objects in the vehicle, learning the vehicle interior, an anomaly, a damaged item or portion of the vehicle interior, a child/baby seat in the vehicle, a number of persons in the vehicle, a detection of too many persons in a vehicle (e.g. 4 children in rear seat when only 3 are allowed), or a person sitting on another person's lap (see para [0093]).
Jeong suggests a vehicle internal door lock-releasing operation using a child locking member 1700 of the door latch system. The separation of the child locking member 1700 from the connected position or the disconnected position is prevented even when the external impact is applied thereto when the child locking member 1700 is in the connected position or in the disconnected position. That is, the erroneous operation of the child locking member 1700 due to the external impact is prevented. The door 1 cannot be opened from the inside of the vehicle when it is in a child locking state, but the door 1 can be opened only from the outside of the vehicle. Thus, the children and the elderly can be protected from the accidents caused by the unexpected opening and closing of the door 1 (see Figs. 24, 31-33, para [0158, 0447, 0449, 0494]). Therefore, it would have been obvious to one skilled in the art before the effective filing date of the invention to add or implement the child lock member of Jeong to the vehicle door locks of Katz and Faith et al for providing protection, safety and security of a child seated inside the vehicle while driving, since built-in child safety locks are available in the automobile industry. Katz also fails to disclose when the vehicle state is a stationary state and the dangerous region is a region where the car-door gap is located, determining that the dangerous situation is that the target object is about to be squeezed by a car door. Jeong suggests that when such a reduction gear is provided, the speed of the motor 3610 is greatly reduced so that the closing operation of the door through the motor 3610 is smoothly performed and the driving torque is secured as well. In addition, since the speed is reduced when closing the door, the door can be opened emergently when a safety related accident happens wherein a body or clothes are squeezed by the door (see Fig. 43, para [0549]).
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the invention to add or implement Jeong's door-closing operation, which allows the door to be opened emergently when a safety-related accident happens wherein a body or clothes are squeezed by the door, to the vehicle door locks of Katz and Faith et al for providing security and safety to a driver, passenger, individual or user seated inside the vehicle. Consider claim 17. The electronic device according to claim 16, wherein the operation of, based on the vehicle state of the vehicle and the dangerous region where the at least local region is belonged, determining the dangerous situation of the target object at the future moment comprises: when the vehicle state is a travelling state, the target object is sitting in the safety seat and the dangerous region is a region other than the first safe region, determining that the dangerous situation is that the target object disengages from the safety seat; when the vehicle state is the travelling state, a child safety lock is not turned on and the dangerous region is a region where the door handle is located, determining that the dangerous situation is that the child safety lock is not turned on; when the vehicle state is a stationary state and the dangerous region is a region where the car-door gap is located, determining that the dangerous situation is that the target object is about to be squeezed by a car door; and when the vehicle state is that a vehicle speed is greater than a preset value and the dangerous region is a region where the car-window opening press key is located, determining that the dangerous situation is that the target object is about to open a car window (as cited, and per the combination of Katz, Faith et al and Jeong in respect to claims 1, 7 above). Response to Arguments Applicant’s arguments, see the amendment, filed on 01/20/2026, with respect to the rejection(s) of claim(s) 1 under Katz have been fully considered and are persuasive.
Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Faith et al [US 2017/0308909] to address the newly amended claim subject matter. Applicant’s arguments: (A) Katz fails to determine a target object from the occupants. However, in claim 1 of the present application, the target object comprises at least one of a child within a preset age interval from 0 years to 12 years, a person having mental retardation and a disabled person having difficulty in moving, which is not disclosed by Katz. Therefore, a person skilled in the art does not know how to determine a dangerous situation of the target object at the future moment, when Katz does not disclose that the target object is a person having mental retardation and a disabled person having difficulty in moving. (B) Katz fails to disclose how to determine a target object from the occupants; a person skilled in the art does not know the detailed steps of determining the target object from the objects that ride in the vehicle based on Katz, and cannot obtain these distinguishing features stated above without any creative effort. Response to the arguments: (A) Katz teaches that the occupant monitoring system (OMS) may be provided to monitor one or more occupants of a vehicle other than the driver. For example, OMS may comprise a system that monitors the occupancy of a vehicle's cabin, detecting and tracking people and objects, and acts according to their presence, position, pose, identity, age, gender, physical dimensions, state, emotion, health, head pose, gaze, gestures, facial features and expressions. In some embodiments, OMS may include one or more modules that detect one or more persons, person recognition/age/gender, person ethnicity, person height, person weight, pregnancy state, posture, out-of-position (e.g.
legs up, lying down, etc.), seat validity (availability of seatbelt), person skeleton posture, seat belt fitting, an object, animal presence in the vehicle, one or more objects in the vehicle, learning the vehicle interior, an anomaly, spillage, discoloration of interior parts, tears in upholstery, child/baby seat in the vehicle, number of persons in the vehicle, too many persons in a vehicle (e.g. 4 children in rear seat, while only 3 allowed), person sitting on other person's lap, or the like (see para [0045, 0050]). Therefore, it is obvious that the OMS monitoring functions determine a specific type of occupant in the vehicle being recognized, such as a baby/child sitting on a person’s lap or in a car seat, a person's age and gender, a pregnant person, and/or health conditions, any of which can mark a target person to be monitored. For example, a driver is a target person whose physiological conditions, gaze movement, physical stress, and/or specific driver behavior are detected and monitored, identified by detecting and/or determining driver actions, which may pose dangerous situations (see para [0052-0054, 0062, 0076]). (B) As cited in section (A) above, the OMS determines and recognizes a target occupant in a vehicle, such as a driver, baby/child, object and/or passenger, including his/her gender, age, pregnancy state and height, to be distinguished from the others. In particular, a driver’s physiological conditions and gaze movement may pose dangerous situations while driving a vehicle. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to primary examiner Van Trieu, whose telephone number is (571) 272-2972. The examiner can normally be reached on Mon-Fri from 8:00 AM to 3:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Wang Quan-Zhen, can be reached at (571) 272-3114. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /VAN T TRIEU/ Primary Examiner, Art Unit 2685 03/10/2026
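The reply-deadline rules recited above (a THREE-MONTH shortened statutory period, never later than SIX MONTHS from the mailing date) reduce to calendar-month date arithmetic. A minimal sketch, assuming plain month-offset dates; the add_months helper is illustrative and ignores USPTO practice of rolling a deadline that falls on a weekend or federal holiday to the next business day.

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    m = d.month - 1 + months
    year = d.year + m // 12
    month = m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

mailing = date(2026, 3, 10)              # final rejection mailed (this action)
ssp_expires = add_months(mailing, 3)     # THREE-MONTH shortened statutory period
statutory_max = add_months(mailing, 6)   # absolute SIX-MONTH statutory cap

print(ssp_expires)     # 2026-06-10
print(statutory_max)   # 2026-09-10
```

Replies after the shortened period but before the six-month cap require an extension fee under 37 CFR 1.136(a), computed month by month from the same baseline.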

Prosecution Timeline

May 20, 2024
Application Filed
Oct 22, 2025
Non-Final Rejection — §103, §112
Jan 20, 2026
Response Filed
Mar 10, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599342
PATIENT REQUEST SYSTEM HAVING PATIENT FALLS RISK NOTIFICATION AND CAREGIVER NOTES ACCESS
2y 5m to grant Granted Apr 14, 2026
Patent 12599522
PATIENT SUPPORT APPARATUSES WITH WIRELESS HEADWALL COMMUNICATION
2y 5m to grant Granted Apr 14, 2026
Patent 12600320
VEHICLE ANTI-THEFT DEVICE AND METHOD THEREFOR
2y 5m to grant Granted Apr 14, 2026
Patent 12598449
SYNCHRONIZATION BETWEEN DEVICES IN EMERGENCY VEHICLES
2y 5m to grant Granted Apr 07, 2026
Patent 12590772
Method and System for Sensing, Monitoring, Logging and Transmitting Events That Is Assembled on a Firearm
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
98%
With Interview (+13.0%)
2y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 1076 resolved cases by this examiner. Grant probability derived from career allow rate.
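The projection arithmetic behind these headline figures can be reconstructed from the raw counts the dashboard reports (909 granted of 1076 resolved). A minimal sketch: the additive interview-lift rule is an assumption about the tool's methodology, and it lands at 97.5% rather than the displayed 98%, so the tool's exact combination rule evidently differs.

```python
# Sketch reconstructing the dashboard's headline numbers from its raw counts.
# The additive interview-lift rule below is an assumption, not the tool's
# published methodology.
granted, resolved = 909, 1076

career_allow_rate = granted / resolved                  # shown rounded as "84%"
print(f"career allow rate: {career_allow_rate:.1%}")    # 84.5%

interview_lift = 0.13                                   # "+13.0% Interview Lift"
with_interview = career_allow_rate + interview_lift
print(f"with interview:    {with_interview:.1%}")       # 97.5%
```

The grant probability shown is simply the career allow rate; the round-count and time-to-grant projections come from the examiner's historical distributions rather than anything computable from these two numbers alone.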
