DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This is in response to Applicant’s application No. 18/910,342, with an effective filing date of 10/9/2024. Claims 1-20 are currently pending.
Priority
This is the first Office action on the merits of the instant application, which was filed 10/9/2024, claiming priority to KR 10-2024-0044433, filed 4/2/2024. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. The application contains claims 1-20.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/9/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the Examiner.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description:
Fig. 3 item 25.
Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
In addition to Replacement Sheets containing the corrected drawing figure(s), applicant is required to submit a marked-up copy of each Replacement Sheet including annotations indicating the changes made to the previous version. The marked-up copy must be clearly labeled as “Annotated Sheets” and must be presented in the amendment or remarks section that explains the change(s) to the drawings. See 37 CFR 1.121(d)(1). Failure to timely submit the proposed drawing and marked-up copy will result in the abandonment of the application.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because:
Line 2 contains language that can be implied (i.e., “are provided”).
A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
____________________________________
The disclosure is objected to because of the following informalities:
[0035] line 8 contains a typographical error where “accompayning” should be corrected to “accompanying”;
[0069] (line 2) item 24 is described as a neural network, but is also described as an emotional classifier in [0070] (line 9), which is construed as a typographical error and should be renumbered to “25”.
Appropriate correction is required.
Claim Objections
Claims 1-3 and 11-12 are objected to because of the following informalities:
claim 1 (line 10 and line 11) and claim 11 (lines 14-15 and lines 15-16) contain a typographical error where “a threatening factor” should be corrected to “[[a]] the threatening factor” to be in proper form;
claim 12 (line 3) contains a typographical error where “a distance” should be corrected to “[[a]] the distance”; and
claim 2 (line 2 and line 10), claim 3 (line 2), and claim 12 (line 9) contain a typographical error where “a threatening factor” should be corrected to “[[a]] the threatening factor”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “normal” in claim 1 (line 13), claim 8 (line 2), and claim 11 (line 18) is a relative term which renders the claims indefinite. The term “normal” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The limitations regarding the state of the vehicle prior to determining that a detected emotion is anger are rendered indefinite due to the use of the relative term because it is unclear what qualifies as a “normal” state of a vehicle.
Claims 2-10 and 12-20 inherit the rejection of the claims from which they respectively depend.
____________________________________________
Regarding claim 17 (lines 2, 4, and 6), each of the cited limitations recites “a feature,” and it is unclear whether each feature is different from the others or whether they are the same feature within differing scopes (e.g., the feature can be the eyes, which may necessarily be extracted in all three regions, or the feature can be the face, the brow-eyes-nose region, and the eyes specifically, depending on scope).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claim(s) 1, 9, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sah et al. (US Pat. Pub. No. 2024/0404296 A1), hereinafter referred to as Sah, in view of Nagata et al. (US Pat. No. 10,850,709 B1), hereinafter referred to as Nagata.
Regarding claim 11, Sah discloses:
A device for protecting a vehicle occupant of a vehicle from an event threatening safety of the vehicle, the device comprising:
one or more memory devices configured to store computer-readable instructions ([0036] sentence 1 (hereinafter “s1”), one or more processing units coupled to a memory); and
one or more processors configured to execute the computer-readable instructions ([0036] s1, as discussed above), wherein the one or more processors are configured to
detect, using a vehicle sensor of the vehicle, a neighboring vehicle of the vehicle or a neighboring person of the vehicle ([0002] s2, when a person or object is sensed entering within a threshold proximity of the vehicle in excess of a period of time, they may be perceived as a security risk),
determine whether the neighboring vehicle or the neighboring person is a threatening factor that threatens safety of the vehicle based on a distance between the vehicle and the neighboring vehicle or the vehicle and the neighboring person (see [0002] s2, as discussed above, where a threshold proximity is construed as a distance between a vehicle and an object of interest (OoI)),
determine, using facial expression recognition ([0028] s5, threat candidate is identified as a person, a threat classification model may attempt to perform a facial recognition (e.g., to determine if the person is a registered user of the vehicle and/or to infer the person's intent in approaching the vehicle)),
switch a state of the vehicle from a normal state to a threatened state ([0004] s3, the threat response system may respond accordingly based on a threat classification and/or risk score, which is based upon the inferred intent of the person approaching the vehicle; construed as switching a state when a threat is determined)
when the state of the vehicle is switched to the threatened state, execute a safe mode (safe mode is defined by Applicant’s disclosure ([0014]) as controlling the closing of windows, doors, or a sunroof of a vehicle; see Sah [0063], windows may be operated based on the threat classification).
Although Sah discloses in [0004] s2 that the system may perform a threat classification and/or intent prediction as the threat candidate continues to approach, and in s3 that the threat response system may respond accordingly based on a threat classification and/or risk score, which is based upon the inferred intent of the person approaching the vehicle (construed as switching a state when a threat is determined), Sah does not explicitly disclose:
whether an emotion of a driver of the neighboring vehicle that is determined to be a threatening factor, or of the neighboring person that is determined to be a threatening factor, is anger.
However, Nagata teaches in column (col) 8, lines (ln) 60-66, that facial recognition can be used to determine whether recognized individuals appear happy or content, or whether they appear frightened, and that AI algorithms can be employed to analyze the individuals and their interactions to determine an emotional level of the interactions and to determine whether unrecognized individuals are friendly or hostile (or another category of) individuals. This is construed as not being limited to happy or frightened, but as also comprising anger, as it is reasonable that all emotional facial expressions can be taught to the program in order to identify each respectively. Further, Nagata teaches in col 9 ln 20-25 identifying facial expressions and performing an action when a hostile behavior is detected, which is construed as changing a state of the vehicle based on anger. Furthermore, in col 14 ln 28-35, Nagata teaches that machine learning techniques can be used to build and train a model to recognize different types of behavior or interplay among the individuals.
Therefore it would have been obvious to one of ordinary skill in the art of facial recognition and vehicle safety controls, before the effective filing date of the claimed invention, to modify the vehicle safety control system/method of Sah by incorporating the facial recognition teachings of Nagata, because doing so allows improved safety through a better response to a possibly dangerous situation, as acknowledged by Nagata in col 15 ln 21-25.
Claim 1 recites a method having substantially the same features as claim 11 above; therefore, claim 1 is rejected for the same reasons as claim 11.
Regarding claim 19
The device of claim 11, wherein the one or more processors are configured to
execute the safe mode to control the vehicle to close one or more of a window of the vehicle, a door of the vehicle, or a sunroof of the vehicle (see claim 11 regarding [0063], windows may be operated based on the threat classification).
Claim 9 recites a method having substantially the same features as claim 19 above; therefore, claim 9 is rejected for the same reasons as claim 19.
______________________________
Claim(s) 2-4, 10, 12-14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sah et al. (US Pat. Pub. No. 2024/0404296 A1), hereinafter referred to as Sah, in view of Nagata et al. (US Pat. No. 10,850,709 B1), hereinafter referred to as Nagata, and Garcia Bordes et al. (US Pat. Pub. No. 2012/0041632 A1), hereinafter referred to as Garcia Bordes.
Regarding claim 12, Sah, as modified by Nagata, discloses:
The device of claim 11, wherein the one or more processors are configured to:
measure a distance between the vehicle and the neighboring vehicle or the vehicle and the neighboring person;
when the distance is less than a predetermined safety distance (see claim 11 regarding [0002] s2, where a threshold proximity is construed as a predetermined safety distance between a vehicle and an OoI; and [0028] s2, a threat candidate may continue to move towards the vehicle and cross within a closer proximity threshold of the vehicle that triggers the external object detector to a higher tier of operation, which is construed as measuring a distance between the vehicle and the respective OoI),
but Sah, as modified by Nagata, does not disclose:
monitor occurrence of a collision event between the vehicle and the neighboring vehicle or an impact event between the vehicle and the neighboring person; and
when the collision event or the impact event is detected, determine the neighboring vehicle or the neighboring person as a threatening factor that threatens the safety of the vehicle.
However, Garcia Bordes teaches in [0004] s2-4 that the object detection device is configured to detect objects next to and approaching the vehicle. The system uses a position, a speed, an acceleration, and a direction of travel of the detected object to categorize the detected object as a potential threat and to indicate when a time to collision is less than a predetermined threshold, based on whether the object is within a first distance of the vehicle. This is construed as monitoring an occurrence of a collision or impact and, when an event occurs, determining that the detected object is a threat.
Therefore it would have been obvious to one of ordinary skill in the art of facial recognition and vehicle safety controls, before the effective filing date of the claimed invention, to modify the vehicle safety control system/method of Sah, as modified by the facial recognition teachings of Nagata, by incorporating the object detection teachings of Garcia Bordes, because doing so provides an improved safety system for automobiles, as acknowledged by Garcia Bordes in [0002].
Claim 2 recites a method having substantially the same features as claim 12 above; therefore, claim 2 is rejected for the same reasons as claim 12.
Regarding claim 13, Sah, as modified by Nagata and Garcia Bordes, discloses:
The device of claim 12, wherein the one or more processors are further configured to, when the distance is equal to or greater than the predetermined safety distance, repeat measuring the distance (a vehicle that monitors a distance between objects and the vehicle would necessarily repeat measuring the distance even when the distance is greater than the threat threshold; see [0028] s1-2, the external object detector operates in a first elevated tier of operation and continues to track the depth and/or velocity of the threat candidate until it begins to move away from the vehicle or otherwise vanishes from the optical flow, but a threat candidate may continue to move towards the vehicle and cross within a closer proximity threshold (i.e., a predetermined safety distance) of the vehicle that triggers the external object detector to a higher tier of operation, which is construed as measuring the distance between the vehicle and the respective OoI at a distance equal to or greater than a predetermined safety distance).
Claim 3 recites a method having substantially the same features as claim 13 above; therefore, claim 3 is rejected for the same reasons as claim 13.
Regarding claim 14, Sah, as modified by Nagata and Garcia Bordes, discloses:
The device of claim 12, wherein:
the impact event includes at least one of a door impact event, a mirror impact event, or a door opening attempt event ([0057] determined to be an imminent high risk threat (e.g., the threat classification model has inferred that the person has initiated a high threat action such as striking the vehicle or attempting to enter the vehicle)).
Further, Garcia Bordes teaches the limitation:
the collision event includes at least one of a forward collision warning event, a forward lateral collision warning event, a rear lateral collision warning event, or a rear collision warning event, in [0004] s3-4 where the object detection device is configured to detect objects next to the vehicle, approaching the vehicle from a side, and approaching the vehicle from behind for detecting potential threats to the vehicle via collision.
Therefore it would have been obvious to one of ordinary skill in the art of facial recognition and vehicle safety controls, before the effective filing date of the claimed invention, to modify the vehicle safety control system/method of Sah, as already modified by Nagata and Garcia Bordes, by further incorporating the directional teachings of Garcia Bordes, such that, as object detections are considered within Sah, directional detection is also considered.
The motivation to do so is the same as acknowledged by Garcia Bordes in regards to claim 12.
Claim 4 recites a method having substantially the same features as claim 14 above; therefore, claim 4 is rejected for the same reasons as claim 14.
Regarding claim 20, Sah, as modified by Nagata and Garcia Bordes, discloses:
The device of claim 12, wherein the one or more processors are further configured to transmit a rescue request to an external system when the collision event or the impact event occurs a predetermined number of times or more, after executing the safe mode ([0058] s2, the threat alert function may be coupled to a network interface to transmit notifications via a network to a user device (e.g., a smart phone and/or other personal smart device) designated by an operator of the vehicle to receive security notifications, which may necessarily be an external system after an incident, where a single incident is construed as a predetermined number of times (i.e., once)).
Claim 10 recites a method having substantially the same features as claim 20 above; therefore, claim 10 is rejected for the same reasons as claim 20.
___________________________________________
Claim(s) 5-7 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Sah et al. (US Pat. Pub. No. 2024/0404296 A1), hereinafter referred to as Sah, in view of Nagata et al. (US Pat. No. 10,850,709 B1), hereinafter referred to as Nagata, Garcia Bordes et al. (US Pat. Pub. No. 2012/0041632 A1), and Holub et al. (US Pat. Pub. No. 2009/0290791 A1), hereinafter referred to as Holub.
Regarding claim 15, Sah, as modified by Nagata and Garcia Bordes, discloses:
The device of claim 12, wherein the one or more processors are configured to:
detect a facial region to perform facial expression recognition in an image of the driver of the neighboring vehicle or an image of the neighboring person,
but Sah, as modified by Nagata and Garcia Bordes, does not disclose:
align facial portions of the facial region;
extract a first level feature from a first region of the facial region;
extract a second level feature from a second region of the facial region that is different from the first region; and
extract a third level feature from a third region of the facial region different from the first region and the second region.
However, Holub teaches in [0079] s3 that the shapes are aligned with a similarity transform that enables translation, scaling, and rotation by minimizing the average Euclidean distance between shape points. Further, [0003] teaches automatically parsing and extracting meta-information from image data. Furthermore, in Fig. 4, below, the three levels of features pertain to three different regions of the face: the first feature pertains to the entire face, the second feature pertains to a partial region comprising the eyes, and the third feature pertains to a fine region comprising the bridge of the nose. Further, in [0048], the tracking module performs template matching at the nodes of a grid and selects the candidate location that provides the best match. Lastly, in claim 4, the reference teaches different potential features to be focused upon, such as the eyes, upper cheeks, and bridge of the nose, each pertaining to its own region of focus.
Therefore it would have been obvious to one of ordinary skill in the art of facial recognition and vehicle safety controls, before the effective filing date of the claimed invention, to modify the vehicle safety control system/method of Sah, as modified by the facial recognition teachings of Nagata and the object detection teachings of Garcia Bordes, by incorporating the facial recognition teachings of Holub, because doing so allows improved accuracy of facial detection, as acknowledged by Holub in [0036] s1.
[Fig. 4 of Holub, reproduced here (media_image1.png, greyscale), showing the three feature levels pertaining to three different regions of the face]
Claim 5 recites a method having substantially the same features as claim 15 above; therefore, claim 5 is rejected for the same reasons as claim 15.
Regarding claim 16, Sah, as modified by Nagata, Garcia Bordes, and Holub, discloses:
The device of claim 15, wherein the one or more processors are further configured to:
select features corresponding to a top certain percentage of the first level feature, the second level feature, and the third level feature having high classification confidence values (see claim 15 regarding selecting features of the face; and [0029] s3-4, where the threat classification model may compute affinity estimates (i.e., a similarity of characteristics suggesting a relationship, especially a resemblance in structure) and confidence values for features (e.g., joints of the OoI), inferring an intent of the person(s) (e.g., as to whether the person(s) is about to strike or otherwise cause damage to the vehicle, reach into the vehicle, or otherwise act in a threatening behavior) based on an assessment);
associate the selected features (see claim 15 regarding claim 4 of the reference and how the features are associated with one another);
perform an emotion classification based on the associated features (see claim 11); and
determine whether the emotion is anger based on a result of the emotion classification (see claim 11).
Claim 6 recites a method having substantially the same features as claim 16 above; therefore, claim 6 is rejected for the same reasons as claim 16.
Regarding claim 17, Sah, as modified by Nagata, Garcia Bordes, and Holub, discloses:
The device of claim 15, wherein:
the first level feature includes a feature extracted from an entire region of the facial region (see claim 15 and Fig. 4 above);
the second level feature includes a feature extracted from a partial region of the facial region (see claim 15 and Fig. 4 above); and
the third level feature includes a feature extracted from a fine region of the facial region (see claim 15 and Fig. 4 above).
Claim 7 recites a method having substantially the same features as claim 17 above; therefore, claim 7 is rejected for the same reasons as claim 17.
Allowable Subject Matter
Claims 8 and 18 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
In particular, the prior art of record does not teach or suggest the limitations of claims 8 and 18 wherein, when the vehicle is traveling, the state of the vehicle is switched to the threatened state when it is determined that the emotion of the driver of the neighboring vehicle is anger.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see:
Lindsay et al. (US Pat. Pub. No. 2020/0238952 A1) is directed towards security systems for secured assets, such as vehicles, in which facial recognition systems or other biometric systems provide enhanced security for an authorized user;
Day et al. (US Pat. Pub. No. 2019/0295207 A1) is directed towards a security system that utilizes facial recognition to detect threats approaching a non-moving entity (e.g., a building);
Boccuccia (US Pat. Pub. No. 2021/0291874 A1) is directed towards a security system that utilizes facial recognition to identify one or more individuals in crisis, whereupon the autonomous vehicle may automatically unlock and/or open its doors to receive the distressed individual;
Johns (US Pat. Pub. No. 2022/0195751 A1) is directed towards a security system that utilizes facial recognition to identify emotions of an individual and determine whether a threat exists, sending control signals to a door of a vehicle;
Jayaweera et al. (US Pat. Pub. No. 2025/0139738 A1) is directed towards high-resolution human imaging using a neural network that detects emotions and performs actions based on the analysis; and
Larcher et al. (WIPO Pub. No. 2023/075746 A1) is directed towards a detection device for detecting the emotional state of a user.
Conclusion
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to KEITH ALLEN VON VOLKENBURG whose telephone number is (703)756-5886. The Examiner can normally be reached Monday-Friday 8:30 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin D. Bishop can be reached at (571) 270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEITH A. VON VOLKENBURG/Examiner, Art Unit 3665
/Erin D Bishop/Supervisory Patent Examiner, Art Unit 3665