DETAILED ACTION
This is a response to applicant’s submissions filed on 1/06/2026. Claims 1-13 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/06/2026 has been entered.
Information Disclosure Statement
The information disclosure statements filed on 11/10/2025 and 1/30/2026 have been reviewed and considered.
Response to Arguments
Applicant’s arguments with respect to the rejections of claim(s) 1-8 and 10 under 35 U.S.C. § 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's arguments with respect to the rejection of claim 9 under 35 U.S.C. § 103 have been fully considered but they are not persuasive.
In response to Applicant’s argument that Nagata does not teach or suggest any specific control related to a scenario where registered and non-registered users are detected (Applicant’s Remarks; p. 14), the Examiner respectfully disagrees. Nagata, in figure 6, discloses different control branches based on how many of the individuals approaching the vehicle are recognized. Nagata further discloses, in column 8, lines 10-42, that the system recognizes approaching individuals based on their registered biometric information that authorizes them to use the vehicle. See rejection below.
Claim Objections
Claims 1-2 are objected to because of the following informalities:
In claim 1, line 30, the phrase “a part configuring a face” is unclear because it does not use proper idiomatic English. The Examiner suggests the phrase would be clearer if it read “a part of a face”.
In claim 1, line 33, the phrase “a part configuring the face” is unclear because it does not use proper idiomatic English. The Examiner suggests the phrase would be clearer if it read “the part of the face”.
In claim 2, line 4, “in which build models generated” should read “in which the generated build models” to make it clear that they are the same build models generated in claim 1, line 15.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 11-12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 11, lines 2-4, the limitation “the first processor, when the predetermined motion is detected before the second authentication is completed, temporarily stores door control contents information” appears to be new matter because there does not appear to be explicit disclosure of the first processor temporarily storing door control contents information when the predetermined motion is detected before the second authentication is completed. Figures 5, 7, and 9 and paragraph 99 appear to disclose that both authentication and gesture detection are required before the door control contents information is transmitted to the first processor; the system waits for a gesture to be detected, and the first processor then responds by either performing or inhibiting door opening control. The limitation “the first processor performs the opening and closing control … based on door control contents information stored”, in lines 5-7, similarly appears to be new matter because it depends on the temporarily stored door control contents information. It is further noted that although the entry ECU explicitly includes the second processor and memory, the zone A-ECU is not similarly recited as including a memory for temporarily storing information.
Regarding claim 11, lines 8-9, the limitation “the first processor discards the door control contents information stored” appears to be new matter because there is no explicit disclosure of the first processor discarding information. Although figures 5 and 7-9 disclose the first processor receives the door control contents information and responds by performing or inhibiting door opening control, there does not appear to be disclosure of subsequent operations on the door control contents information. A processor does not inherently discard information after using it.
Regarding claim 12, lines 4-5, the limitation “the specific predetermined motion with which the opening and closing control is prohibited in response is a trigger including unlocking the vehicle with an electronic key” appears to be new matter because there does not appear to be disclosure of prohibiting the opening and closing control of the door in response to unlocking the vehicle with an electronic key. Paragraph 25 discloses cancelling inhibition of the execution of the door opening control when triggered by unlocking the door by the electronic key. Rather than prohibiting opening and closing control by unlocking the vehicle with the key, paragraph 25 appears to disclose cancelling the function that prevents the vehicle from opening, i.e., the electronic key overrides the system when it is used to unlock the door.
Regarding claim 12, lines 4-6, the limitation “the specific predetermined motion with which the opening and closing control is prohibited is a trigger including … an input made via a touch sensor” appears to be new matter because, although paragraphs 10-11 disclose a touch panel and a touch sensor, there does not appear to be disclosure that the touch sensor interacts with the second processor, either alone or in combination with the electronic key.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 1, lines 15 and 17-18, the limitation “from each of the vehicle outside image” renders the claim indefinite because the claim, in line 7, only recites acquiring a single vehicle outside image; therefore, it is unclear whether the multiple build models are generated from a single vehicle outside image or from multiple vehicle outside images. Paragraph 36 discloses a build model is generated for each vehicle outside image of the moving images captured by cameras; therefore, for the purposes of examination, it will be assumed that a plurality of images are acquired.
Regarding claim 1, line 16, the limitation “simulate a figure of a person” renders the claim indefinite because it is unclear if the simulated figure is of the vehicle outside person detected in line 9. Paragraph 29 discloses the build model is a model simulating a figure of a person by indicating the position of a head, the positions of shoulders, the positions of elbows, the position of a waist, the positions of knees, and the positions of ankles by dots and connecting the individual dots by lines. Paragraph 41 further discloses generating the build model of the vehicle outside person detected in each vehicle outside image. Therefore, for the purposes of examination, it will be assumed that the build models simulate a figure of the vehicle outside person.
Regarding claim 1, lines 16-18, the limitation “generates a first approach mode data for which a plurality of build models each generated from each of the vehicle outside image are time-sequentially lined up” renders the claim indefinite because it is unclear if the plurality of build models are the same build models generated in line 15. For the purposes of examination, it will be assumed that the claim is directed to generating a single plurality of build models of the detected vehicle outside person.
Regarding claim 1, lines 18-20, the limitation “compares a time change of the build models indicated by the first approach mode data and a time change of the build models indicated by a second approach mode data” renders the claim indefinite because it is unclear what a time change of a build model refers to; therefore, it is unclear how the generated and stored build models are compared. Paragraph 41 similarly discloses comparing time changes of the build models. For the purposes of examination, it will be assumed that the claim is directed to comparing build model changes at multiple times.
Regarding claim 1, lines 19-20, the limitation “a time change of the build models indicated by a second approach mode data” renders the claim indefinite because it is unclear if the build models indicated by the second approach mode data are the same build models generated in line 15. For the purposes of examination, it will be assumed that the claim is directed to generating a first set of build models, and comparing them to a second set of build models that are stored in the memory.
Regarding claim 1, line 25, the limitation “authenticating that the vehicle outside person detected is the registered person” renders the claim indefinite because “the registered person” lacks sufficient antecedent basis in the claim; therefore, it is unclear if the registered person is the vehicle outside person detected in line 9, the registered user authenticated in lines 11-12, or the person that is simulated in line 16. For the purposes of examination, it will be assumed that the registered person is the vehicle outside person authenticated as a registered user.
Regarding claim 1, lines 25-27, the limitation “when the second approach mode data whose concordance rate with the first approach mode data is at a predetermined concordance rate or higher is stored in the memory” renders the claim indefinite because it is unclear which of the following parameters are stored in the memory: the second approach mode data, the concordance rate of the second approach mode data with the first approach mode data, or the predetermined concordance rate. Because the second approach mode data is stored in the memory in advance, the authentication cannot occur simultaneously with its storage; therefore, for the purposes of examination, it will be assumed that an indication of a concordance rate meeting or exceeding a minimum concordance rate is stored in the memory.
Regarding claim 2, line 2, the limitation “the second processor generates a build model” renders the claim indefinite because it is unclear if it is one of the build models generated by the second processor in claim 1, line 15. For the purposes of examination, it will be assumed that the claims are directed to the same plurality of build models.
Regarding claim 2, lines 3-5, the limitation “generates the first approach mode data … and authenticates the second approach mode data” renders the claim indefinite because it appears to repeat the steps of generating the first approach mode data and authenticating based on the second approach mode data that are recited in claim 1, lines 15-27, therefore, it is unclear if claim 2 recites additional steps. For the purposes of examination, it will be assumed that claim 2 is drawn to the same steps of claim 1.
Regarding claim 2, lines 6-7, the limitation “a predetermined concordance rate” renders the claim indefinite because it is unclear if it is the same predetermined concordance rate recited in claim 1, line 27. For the purposes of examination, it will be assumed that both claims are directed to the same predetermined concordance rate.
Regarding claim 8, lines 12 and 14-15, the limitation “from each of the vehicle outside image” renders the claim indefinite because the claim, in line 3, only recites acquiring a single vehicle outside image; therefore, it is unclear whether the multiple build models are generated from a single vehicle outside image or from multiple vehicle outside images. Paragraph 36 discloses a build model is generated for each vehicle outside image of the moving images captured by cameras; therefore, for the purposes of examination, it will be assumed that a plurality of images are acquired.
Regarding claim 8, line 13, the limitation “simulate a figure of a person” renders the claim indefinite because it is unclear if the simulated figure is of the vehicle outside person detected in line 5. Paragraph 29 discloses the build model is a model simulating a figure of a person by indicating the position of a head, the positions of shoulders, the positions of elbows, the position of a waist, the positions of knees, and the positions of ankles by dots and connecting the individual dots by lines. Paragraph 41 further discloses generating the build model of the vehicle outside person detected in each vehicle outside image. Therefore, for the purposes of examination, it will be assumed that the build models simulate a figure of the vehicle outside person.
Regarding claim 8, lines 13-15, the limitation “generating a first approach mode data for which a plurality of build models each generated from each of the vehicle outside image are time-sequentially lined up” renders the claim indefinite because it is unclear if the plurality of build models are the same build models generated in line 12. For the purposes of examination, it will be assumed that the claim is directed to generating a single plurality of build models of the detected vehicle outside person.
Regarding claim 8, lines 16-17, the limitation “comparing a time change of the build models indicated by the first approach mode data and a time change of the build models indicated by a second approach mode data” renders the claim indefinite because it is unclear what a time change of a build model refers to; therefore, it is unclear how the generated and stored build models are compared. Paragraph 41 similarly discloses comparing time changes of the build models. For the purposes of examination, it will be assumed that the claim is directed to comparing build model changes at multiple times.
Regarding claim 8, lines 16-17, the limitation “a time change of the build models indicated by a second approach mode data” renders the claim indefinite because it is unclear if the build models indicated by the second approach mode data are the same build models generated in line 12. For the purposes of examination, it will be assumed that the claim is directed to generating a first set of build models, and comparing them to a second set of build models that are stored in the memory.
Regarding claim 8, lines 22-23, the limitation “authenticating that the vehicle outside person detected is the registered person” renders the claim indefinite because “the registered person” lacks sufficient antecedent basis in the claim; therefore, it is unclear if the registered person is the vehicle outside person detected in line 5, the registered user authenticated in lines 8-9, or the person that is simulated in line 13. For the purposes of examination, it will be assumed that the registered person is the vehicle outside person authenticated as a registered user.
Regarding claim 8, lines 23-25, the limitation “when the second approach mode data whose concordance rate with the first approach mode data is at a predetermined concordance rate or higher is stored in the memory” renders the claim indefinite because it is unclear which of the following parameters are stored in the memory: the second approach mode data, the concordance rate of the second approach mode data with the first approach mode data, or the predetermined concordance rate. Because the second approach mode data is stored in the memory in advance, the authentication cannot occur simultaneously with its storage; therefore, for the purposes of examination, it will be assumed that an indication of a concordance rate meeting or exceeding a minimum concordance rate is stored in the memory.
Regarding claim 8, line 35, the limitation “a predetermined motion is detected” renders the claim indefinite because it is unclear if it is the same predetermined motion of the vehicle outside person detected in line 5. For the purposes of examination, it will be assumed that the claim is directed to detecting the predetermined motion of the person outside the vehicle one time.
Regarding claim 9, lines 28-30, the limitation “the first mode includes one of unlocking all doors or unlocking only certain doors, the second mode includes one of unlocking all doors or unlocking only certain doors” renders the claim indefinite because it is unclear how the modes are different if they can both unlock the same doors. Paragraphs 91 and 35 similarly disclose performing the door opening control in a first or second mode that can be made different, according to whether or not the non-registered user is present with the registered user, without specifying any other differences. For the purposes of examination, it will be assumed that the modes refer to any instructions following a determination of whether at least one registered user is detected, or a registered user and a non-registered user.
Regarding claim 10, lines 11-12, the limitation “detects gestures, to the vehicle, of the vehicle outside person detected” renders the claim indefinite because detecting a plurality of vehicle outside persons lacks sufficient antecedent basis in the claim; therefore, it is unclear how the vehicle outside persons are detected, and whether they include the vehicle outside person detected in line 9. For the purposes of examination, it will be assumed that the claim is directed to detecting more than one person outside the vehicle based on the acquired image.
Regarding claim 10, lines 13-14, the limitation “based on a predetermined part of the vehicle outside persons” renders the claim indefinite because it is unclear if the authentication is based on one part of one person in the group of people outside the vehicle, one part on each of the people in the group, or simply a subset of the group of people outside the vehicle. For the purposes of examination, it will be assumed that each person outside the vehicle is authenticated based on a predetermined part of their body (e.g., their face).
Regarding claim 11, lines 5-7, the limitation “the first processor performs the opening and closing control … based on door control contents information stored” renders the claim indefinite because there does not appear to be explicit disclosure of the first processor temporarily storing door control contents information when the predetermined motion is detected before the second authentication is completed; therefore, it is unclear how the first processor performs the opening and closing control of the door based on said temporarily stored information. Figures 5, 7, and 9 and paragraph 99 appear to disclose that both authentication and gesture detection are required before the door control contents information is transmitted to the first processor; the system waits for a gesture to be detected, and the first processor then responds by either performing or inhibiting door opening control. For the purposes of examination, it will be assumed that claim 11 is generally directed to asynchronously performing the authentication and gesture recognition in any order.
Regarding claim 12, lines 4-5, the limitation “the specific predetermined motion with which the opening and closing control is prohibited in response is a trigger including unlocking the vehicle with an electronic key” renders the claim indefinite because it is unclear how unlocking the vehicle with an electronic key prohibits opening and closing control. Paragraph 25 discloses cancelling inhibition of the execution of the door opening control when triggered by unlocking the door by the electronic key. Rather than prohibiting opening and closing control by unlocking the vehicle with the key, paragraph 25 appears to disclose cancelling the function that prevents the vehicle from opening, i.e., the electronic key overrides the system when it is used to unlock the door. For the purposes of examination, it will be assumed that the second processor cancels inhibiting executing door opening control in response to unlocking the vehicle with an electronic key.
Regarding claim 12, lines 4-6, the limitation “the specific predetermined motion with which the opening and closing control is prohibited is a trigger including … an input made via a touch sensor” renders the claim indefinite because it is unclear how a touch sensor is used in combination with an electronic key to prohibit opening and closing control of the door. Although paragraphs 10-11 disclose a touch panel and a touch sensor, there does not appear to be disclosure that the touch sensor interacts with the second processor, either alone or in combination with the electronic key. Paragraph 13 further discloses the entry ECU processes user access from the outside of the vehicle; therefore, it is further unclear how the touch sensor, which is inside the vehicle, is used in combination with the electronic key, which is with a user outside the vehicle, to prohibit opening and closing control. For the purposes of examination, it will be assumed that either the electronic key or the touch sensor is used to inhibit executing door opening control.
Claims 2-7 and 11-13 are rejected as being dependent on a rejected claim and for failing to cure the deficiencies listed above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 8, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Sugita et al. (US 2022/0339993) in view of Jones et al. (US 2016/0300410), Chauhan et al. (US 2024/0135930), and Park (US 2023/0184024), hereinafter Sugita 1, Jones, Chauhan, and Park, respectively.
Regarding claims 1 and 8, as best understood, Sugita 1 discloses a control system which is a vehicle control system (Sugita 1; para. 22: a vehicle door opening/closing device for controlling opening and closing of a vehicle door is described as an example of a vehicle control system), comprising: a zone A-ECU (Sugita 1; fig. 1: door control ECU 14) including a first processor (Sugita 1; para. 25: door control ECU 14 is similarly configured by a microcomputer including a CPU 14) for performing opening and closing control of a door of a vehicle (Sugita 1; para. 31: actuator 24 opens and closes a vehicle door 30 (see FIG. 3) at a vehicle rear section under the control of the door control ECU 14); and an entry ECU (Sugita 1; fig. 1: authentication ECU 12) including a second processor and a memory (Sugita 1; para. 24: authentication ECU 12 is configured by a microcomputer including a central processing unit (CPU) 12A, read only memory (ROM) 12B, random access memory (RAM) 12C, and so on).
Sugita 1 does not explicitly disclose the second processor acquires a vehicle outside image which is an image captured by a camera that captures an image outside the vehicle, detects a vehicle outside person present outside the vehicle based on the vehicle outside image acquired, and performs authentication as to whether or not the vehicle outside person is a registered user by two-factor authentication of a first authentication and a second authentication; in the two-factor authentication, the second processor generates, from each of the vehicle outside image, build models of the vehicle outside person detected which simulate a figure of a person, generates a first approach mode data for which a plurality of build models each generated from each of the vehicle outside image are time-sequentially lined up, compares a time change of the build models indicated by the first approach mode data and a time change of the build models indicated by a second approach mode data stored in the memory in advance, and performs the second authentication of authenticating that the vehicle outside person detected is the registered person when the second approach mode data whose concordance rate with the first approach mode data is at a predetermined concordance rate or higher is stored in the memory, and performs the first authentication of extracting, from the vehicle outside image, an image of a part configuring a face of the vehicle outside person authenticated by the second authentication, and authenticating whether or not the vehicle outside person authenticated by the second authentication is the registered user by comparing the image of the part configuring the face extracted and a face data of the 
registered user recorded in the memory in advance, the second processor extracts a characteristic of a motion, of the vehicle outside person authenticated by the two-factor authentication, from the vehicle outside image, and determines that a predetermined motion is detected when a motion content recognized from the characteristic of the motion extracted is a motion content recorded in the memory in advance, and when the predetermined motion detected is the predetermined motion of the vehicle outside person authenticated as the registered user by the two-factor authentication, transmits, to the first processor, first door control contents information indicating control contents related to opening the door of the vehicle, the first door control contents information is recorded, in the memory in advance, in association with the predetermined motion detected, and the first processor performs the opening and closing control of the door of the vehicle based on the first door control contents information received.
Jones, in the same field of endeavor (vehicle entry controls), discloses a processor acquires a vehicle outside image which is an image captured by a camera that captures an image outside the vehicle (Jones; para. 22: front and rear cameras 20, 22 are also configured to capture still image data from the scene surrounding the vehicle. The image data from the front and rear cameras 20, 22 is transmitted to the vehicle door access system 24), detects a vehicle outside person present outside the vehicle based on the vehicle outside image acquired, and performs authentication as to whether or not the vehicle outside person is a registered user by two-factor authentication of a first authentication and a second authentication (Jones; para. 27: The still image comparator module 34 compares images captured from the cameras 20, 22 with the stored still image identifier. In a similar fashion, the moving image comparator module 36 compares moving images captured from the cameras 20, 22 with the stored moving image identifier. In the event that there is correspondence between the still image and the stored still image, and between the stored moving image and captured moving image, an authorisation signal 41 is output to a door lock control module 40 to control activation of a door lock 42 to permit user access to the vehicle.), in the two-factor authentication, the processor performs the second authentication of authenticating that the vehicle outside person detected is the registered person when second approach mode data whose concordance rate with first approach mode data is at a predetermined concordance rate or higher is stored in memory (Jones; para. 32: the image data relating to the user's gait is passed to the moving image comparator module 36. The moving comparator module 36 retrieves the pre-recorded moving image identifier for the authorised user from the memory 38 and compares this with the captured moving image using an image recognition comparison algorithm. 
If there is a correspondence, the moving image comparator module 36 sends a signal 41 to the door lock control module 40 to indicate that the moving image matches the authorised moving image identifier), and performs the first authentication of extracting, from the vehicle outside image, an image of a part configuring a face of the vehicle outside person authenticated by the second authentication (Jones; para. 33: Simultaneously or near simultaneously with the procedure for comparing of the moving image with the moving image identifier, the camera (20 or 22) records a still image of the user's face as they approach the vehicle … The still image that is recorded (from either the front camera 20 or the rear camera 22, depending on the direction of approach) is transmitted to the camera control module 30 where it undergoes image processing to remove background clutter from the image scene, as discussed above. The image data is then passed to the still image comparator module 34 which retrieves the still image identifier from the memory 38 for comparison purposes), and authenticating whether or not the vehicle outside person authenticated by the second authentication is the registered user by comparing the image of the part configuring the face extracted and a face data of the registered user recorded in the memory in advance (Jones; para. 34: still image comparator module 34 is provided with a facial recognition algorithm which looks for a correspondence between the recorded still image and the stored image identifier. 
If the facial recognition software determines that there is a correspondence between the recorded still image and the still image identifier, the still image comparator module 34 sends a signal to the door lock control module 40 to indicate that the still image matches the authorised still image identifier), the processor transmits first door control contents information indicating control contents related to opening the door of the vehicle, and performs the opening control of the door of the vehicle based on the first door control contents information received (Jones; para. 34: door lock control module 40 transmits a signal 41 to the door lock 42 to unlock the vehicle door; para. 40: as soon as the door is unlocked, a force is applied to the door 12 to urge the door open).
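For illustration only (not part of any cited reference; all identifiers below are hypothetical), the two-factor flow that Jones describes in paragraphs 27-34, in which a moving-image (gait) comparison and a still-image (face) comparison must both succeed before an authorisation signal is sent to the door lock, may be sketched as:

```python
# Hypothetical sketch of the Jones two-factor flow (illustrative only).

def matches(captured, stored, threshold=0.9):
    """Toy concordance check: fraction of identical features meets a threshold."""
    hits = sum(1 for c, s in zip(captured, stored) if c == s)
    return hits / max(len(stored), 1) >= threshold

def two_factor_unlock(gait_frames, face_image, stored_gait, stored_face):
    """Unlock only when both comparisons succeed."""
    second_auth = matches(gait_frames, stored_gait)  # moving image comparator
    first_auth = matches(face_image, stored_face)    # still image comparator
    return second_auth and first_auth                # authorisation signal
```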
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the authentication ECU of Sugita 1 to transmit door control signals for one or more doors, in response to authenticating, based on exterior camera images, both the motion of an approaching person and their face, as disclosed by Jones, and to have further modified the door opening and closing control of the door control ECU of Sugita 1 to be performed in response to receiving the door control signals, as further disclosed by Jones, with the motivation of providing a more robust and higher security door access system for the vehicle (Jones; para. 31).
Sugita 1, as modified, does not explicitly disclose the second processor extracts a characteristic of a motion, of the vehicle outside person authenticated by the two-factor authentication, from the vehicle outside image, and determines that a predetermined motion is detected when a motion content recognized from the characteristic of the motion extracted is a motion content recorded in the memory in advance, and, when the predetermined motion detected is the predetermined motion of the vehicle outside person authenticated as the registered user by the two-factor authentication, transmits, to the first processor, the first door control contents information.
Park, in the same field of endeavor (vehicle entry controls), discloses extracting a characteristic of a motion (Park; para. 86: controller 140 may detect a gesture of a first pattern in which the user shakes a head thereof based on consecutive first to third images IMG1 to IMG3. Alternatively, the controller 140 may detect a gesture of a second pattern in which the user nods the head thereof based on consecutive fourth to sixth images IMG4 to IMG6), of a vehicle outside person authenticated from the vehicle outside image (Park; para. 122: In S1040, the controller 140 may perform additional user authentication. The additional user authentication may be an operation of supplementing the user authentication performed based on the built-in cam BC in S1010.), and determines that a predetermined motion is detected when a motion content recognized from the characteristic of the motion extracted is a motion content recorded in the memory in advance (Park; para. 85: controller 140 may perform the user authentication by determining a gesture of the object ob and determining whether the gesture of the object corresponds to a pre-stored pattern), and when the predetermined motion detected is the predetermined motion of the vehicle outside person authenticated as the registered user, transmitting door control information (Park; fig. 10: S1050; para. 79: controller 140 may control the door driver 160 to open a door corresponding to a position closest to the position of the user).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have further modified the authentication ECU of Sugita 1, as modified (in addition to the two-factor authentication and the transmission of the door control contents information), to require the user to perform a gesture before opening or closing the doors, as disclosed by Park, to yield the predictable result of further increasing the security of accessing the vehicle.
As discussed above, Park discloses that the user’s gesture is authenticated by comparing it to a pre-stored pattern, and that the door corresponding to the position of the user is opened in response; however, Park does not appear to explicitly disclose the first door control contents information is recorded, in the memory in advance, in association with the predetermined motion detected.
However, Jones further discloses door control contents information is recorded, in a memory in advance, in association with a predetermined motion detected (Jones; para. 38: if the user approaches and raises a single finger as a gesture, this may be linked to the opening of one of the vehicle doors, whereas the user raising two fingers, or a whole hand, may be linked to the opening of two or more vehicle doors. In this case the moving images are captured and compared with two or more stored user identifiers, each of which corresponds to a different door unlock sequence).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have identified different stored gestures that correspond with different door unlock sequences, as disclosed by Jones, in the authentication ECU of Sugita 1, as modified, to yield the predictable result of providing the user with multiple door control options to use in different vehicle approach scenarios.
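For illustration only (not part of any cited reference; the gesture and door names are hypothetical), the association Jones describes in paragraph 38, in which each stored gesture is linked in advance to a different door unlock sequence, may be sketched as:

```python
# Hypothetical mapping of stored gestures to door unlock sequences.
GESTURE_TO_DOORS = {
    "one_finger": ["driver_door"],
    "two_fingers": ["driver_door", "front_passenger_door"],
    "whole_hand": ["driver_door", "front_passenger_door",
                   "rear_left_door", "rear_right_door"],
}

def doors_for_gesture(gesture):
    # A gesture recorded in memory in advance selects its door control
    # contents; an unrecognized gesture unlocks nothing.
    return GESTURE_TO_DOORS.get(gesture, [])
```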
Sugita 1, as modified, does not explicitly disclose the second processor generates, from each of the vehicle outside image, build models of the vehicle outside person detected which simulate a figure of a person, generates a first approach mode data for which a plurality of build models each generated from each of the vehicle outside image are time-sequentially lined up, compares a time change of the build models indicated by the first approach mode data and a time change of the build models indicated by a second approach mode data stored in the memory in advance.
Chauhan, in a reasonably pertinent field of endeavor (building and vehicle access controls), discloses generating, from each of a plurality of images (Chauhan; para. 176: At block 808, if a subsequent frame is detected, then the process can repeat), build models of a person detected which simulate a figure of a person (Chauhan; para. 176: process 800 uses a “bottom-up” approach to pose extraction by performing object detection on a frame to determine the presence of a user at block 802, extracting the skeleton of the user at block 804, and detecting a pose from the extracted skeleton at block 806); generating a first approach mode data for which a plurality of build models each generated from each of the images are time-sequentially lined up (Chauhan; para. 176: external behavior data for the user is extracted by a tracking process across frames); and comparing a time change of the build models indicated by the first approach mode data and a time change of the build models indicated by a second approach mode data stored in the memory in advance (Chauhan; para. 177: over the course of a plurality of video frames a user may perform actions that result in a skeleton corresponding to the user undergoing a series of poses during those video frames. These poses may be compared with poses from other external behavior data of the user and other users … To do so, ML models may be trained using a positive dataset (e.g., videos of a user exhibiting the external behavior) and a negative dataset (e.g., videos of the user not exhibiting during the external behavior, videos of other users exhibiting a similar behavior, etc.). For example, a gait recognition model can be trained using a positive data set of videos of the user walking and a negative dataset of videos of the user not walking and/or videos of other users (e.g., walking or not walking)).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the concordance determination of the moving image comparator module of Sugita 1, as modified, to compare a sequence of detected skeleton poses of a person to pose data in pre-stored videos, as disclosed by Chauhan, with the motivation of accurately identifying the user from external behavior data (Chauhan; para. 186).
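For illustration only (not part of any cited reference; the function names and the flat pose labels are hypothetical stand-ins for extracted skeletons), the Chauhan-style comparison, in which a skeleton pose is extracted per frame, the poses are lined up time-sequentially, and the observed sequence is compared against a pre-stored sequence at a concordance threshold, may be sketched as:

```python
# Hypothetical sketch of a time-sequential pose comparison (illustrative only).

def extract_pose(frame):
    """Stand-in for skeleton extraction; here each frame is already a pose label."""
    return frame

def concordance_rate(first_sequence, second_sequence):
    """Fraction of time steps whose poses agree between the two sequences."""
    pairs = list(zip(first_sequence, second_sequence))
    if not pairs:
        return 0.0
    return sum(1 for a, b in pairs if a == b) / len(pairs)

def second_authentication(frames, stored_sequence, threshold=0.8):
    observed = [extract_pose(f) for f in frames]  # first approach mode data
    return concordance_rate(observed, stored_sequence) >= threshold
```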
Regarding claim 2, as best understood, Sugita 1, as modified, discloses in the second authentication, the second processor generates a build model of the vehicle outside person detected from the vehicle outside image (Chauhan; para. 176: process 800 uses a “bottom-up” approach to pose extraction by performing object detection on a frame to determine the presence of a user at block 802, extracting the skeleton of the user at block 804, and detecting a pose from the extracted skeleton at block 806), generates the first approach mode data in which build models generated are time-sequentially lined up, and authenticates the stored second approach mode data whose concordance rate with time change of the build model generated is at a predetermined concordance rate or higher (Chauhan; para. 177: over the course of a plurality of video frames a user may perform actions that result in a skeleton corresponding to the user undergoing a series of poses during those video frames. These poses may be compared with poses from other external behavior data of the user and other users … To do so, ML models may be trained using a positive dataset (e.g., videos of a user exhibiting the external behavior) and a negative dataset (e.g., videos of the user not exhibiting during the external behavior, videos of other users exhibiting a similar behavior, etc.). For example, a gait recognition model can be trained using a positive data set of videos of the user walking and a negative dataset of videos of the user not walking and/or videos of other users (e.g., walking or not walking); Jones; para. 32: the image data relating to the user's gait is passed to the moving image comparator module 36. The moving comparator module 36 retrieves the pre-recorded moving image identifier for the authorised user from the memory 38 and compares this with the captured moving image using an image recognition comparison algorithm. 
If there is a correspondence, the moving image comparator module 36 sends a signal 41 to the door lock control module 40 to indicate that the moving image matches the authorised moving image identifier).
Regarding claim 13, as best understood, Sugita 1, as modified, discloses the invention substantially as claimed as described above.
Sugita 1, as modified, does not explicitly disclose when the second authentication using the first approach mode data is completed successfully by the second processor, the first processor activates a lamp or a display on the vehicle to light to notify the approach mode authentication is completed successfully.
Park discloses when authentication is completed successfully by a processor, the processor activates a lamp or a display on a vehicle to light to notify the approach mode authentication is completed successfully (Park; para. 90: controller 140 may notify that the user authentication is successful using the light emission of the lamp 151; para. 123: controller 140 may request the additional [i.e., after completing the user authentication based on the built-in cam] user authentication using the light emission of the lamp 151).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the entry ECU of Sugita 1, as modified, to flash the vehicle lights in response to completing each authentication, as disclosed by Park, to yield the predictable result of notifying the approaching people of the vehicle’s status.
Claim(s) 3 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sugita 1 in view of Jones, Chauhan and Park as applied to claim 1 above, and further in view of Nagata et al. (US 10,850,709), hereinafter Nagata.
Regarding claim 3, as best understood, Sugita 1, as modified, discloses a plurality of door control contents information are recorded in the memory in advance (Jones; para. 38: if the user approaches and raises a single finger as a gesture, this may be linked to the opening of one of the vehicle doors, whereas the user raising two fingers, or a whole hand, may be linked to the opening of two or more vehicle doors. In this case the moving images are captured and compared with two or more stored user identifiers, each of which corresponds to a different door unlock sequence).
Sugita 1, as modified, does not explicitly disclose the second processor, when a plurality of vehicle outside persons including a non-registered user who is not the registered user and the registered user are detected, transmits, to the first processor, second door control contents information indicating control contents related to opening and closing of the door of the vehicle when the non-registered user is with the registered user, the second door control contents information is recorded in the memory in advance.
Nagata, in the same field of endeavor (vehicle access controls), discloses when a plurality of vehicle outside persons including a non-registered user and a registered user are detected (Nagata; col. 8, ll. 14-20: one or more of a variety of sensors can be used to detect approaching individuals. At operation 214, images of the approaching individuals may be collected, and the system can use facial recognition or other image-analysis-based recognition techniques to determine whether the system recognizes one or more of the approaching individuals), transmitting door control information related to opening and closing of a door of a vehicle when the non-registered user is with the registered user (Nagata; col. 8, ll. 43-47: Where the system does not recognize all of the approaching individuals, the system can perform further analysis on the approaching group to determine whether it should unlock the vehicle and, if so, to what extent it should unlock the vehicle.).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the authentication ECU of Sugita 1, as modified, to transmit instructions to selectively open one or more doors when authorized and unauthorized people are detected approaching the vehicle, as disclosed by Nagata, to yield the predictable result of automatically opening the door for each passenger.
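For illustration only (not part of any cited reference; the control-contents strings are hypothetical), Nagata's group scenario, in which the extent of unlocking differs depending on whether all, some, or none of the approaching individuals are recognized, may be sketched as:

```python
# Hypothetical decision logic for a mixed group of approaching persons.

def door_control_contents(approaching, registered):
    recognized = [p for p in approaching if p in registered]
    if not recognized:
        return "keep_locked"              # no registered user detected
    if len(recognized) == len(approaching):
        return "unlock_all_doors"         # every approaching person recognized
    return "unlock_passenger_doors_only"  # registered user accompanied by
                                          # one or more non-registered users
```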
Regarding claim 12, as best understood, Sugita 1, as modified, discloses the second processor performs opening and closing control of the door (Sugita 1; para. 35: door control ECU 14 controls the actuator 24 so as to perform closing of the vehicle door 30 based on … a switch operation of the electronic key 32) in response to detecting a specific predetermined motion, and the specific predetermined motion is a trigger including unlocking the vehicle with an electronic key and an input made via a touch sensor (Sugita 1; para. 34: a switch to instruct opening and closing of the vehicle door 30 is provided to the electronic key 32, and an instruction to open or close the vehicle door 30 may be issued by operating this switch).
Sugita 1, as modified, does not explicitly disclose prohibiting the opening and closing control of the door in response to detecting the specific predetermined motion.
Nagata discloses prohibiting opening and closing control of a door in response to detecting signals from an electronic key (Nagata; col. 9, ll. 48-57: the system may be configured to also recognize other signals provided by recognized individuals to confirm or override the system's access decision. In some implementations, the system can employ gesture recognition to enable recognized individuals to communicate with the vehicle using various gestures, including innocuous gestures that might not be recognized by hostile parties. The system may also be configured such that signals from a key fob operated by a recognized user may override the system decision).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the door control performed by the authentication ECU of Sugita 1, as modified, to be overridden by inputs from an electronic key, as disclosed by Nagata, to yield the predictable result of allowing the user to directly control access to the vehicle.
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sugita 1 in view of Jones, Chauhan, Park and Nagata as applied to claim 3 above, and further in view of Ding et al. (CN 112389458), hereinafter Ding.
Regarding claim 4, as best understood, Sugita 1, as modified, discloses the second processor transmitting, to the first processor, the first door control contents information based on the predetermined motion of the vehicle outside person authenticated as the registered user by the two-factor authentication (Jones; para. 27: The still image comparator module 34 compares images captured from the cameras 20, 22 with the stored still image identifier. In a similar fashion, the moving image comparator module 36 compares moving images captured from the cameras 20, 22 with the stored moving image identifier. In the event that there is correspondence between the still image and the stored still image, and between the stored moving image and captured moving image, an authorisation signal 41 is output to a door lock control module 40 to control activation of a door lock 42 to permit user access to the vehicle.), and a plurality of door control contents information are recorded in the memory in advance (Jones; para. 38: if the user approaches and raises a single finger as a gesture, this may be linked to the opening of one of the vehicle doors, whereas the user raising two fingers, or a whole hand, may be linked to the opening of two or more vehicle doors. In this case the moving images are captured and compared with two or more stored user identifiers, each of which corresponds to a different door unlock sequence).
Sugita 1, as modified, does not explicitly disclose when a predetermined motion of the non-registered user is detected after transmitting the first door control contents information, transmits, to the first processor, third door control contents information in association with the predetermined motion of the non-registered user, that indicates control contents related to opening and closing of the door of the vehicle, and that indicates control contents in case the predetermined motion is a motion by the non-registered user.
Ding, in the same field of endeavor (automotive controls), discloses when a predetermined motion of a passenger is detected (Ding; para. 70: the driver or passenger can interact with the holographic interaction system 400 using specific air gesture commands) transmitting door control information in association with the predetermined motion of the passenger, that indicates control contents related to opening and closing of the door of the vehicle, and that indicates control contents in case the predetermined motion is a motion by the passenger (Ding; para. 71: Occupants can manually operate, for example, a model car door in the interactive area. When an occupant's hand approaches the model car door, the position sensing device 4012 of the figurative interactive device 401 detects the occupant's movement. When the occupant's finger approaches or enters the operable area, the position sensor 4012 feeds the operation command obtained from the holographic projection model 4010 to the holographic projector 4011, and the holographic projector 4011 sends it to the controller 402. After receiving the instruction, the controller 402 controls the door by driving the actuator 404 of the vehicle 100—for example, closing the door.).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the authentication ECU of Sugita 1, as modified, to close the door of the non-registered user, after the non-registered user has entered the vehicle, in response to their gesture, as disclosed by Ding, to yield the predictable result of preparing the vehicle for travel.
Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sugita 1 in view of Jones, Chauhan and Park as applied to claim 2 above, and further in view of Salter et al. (US 2023/0061499), hereinafter Salter 1.
Regarding claim 5, as best understood, Sugita 1, as modified, discloses the predetermined motion is a gesture to the vehicle (Park; para. 85: controller 140 may perform the user authentication by determining a gesture of the object ob and determining whether the gesture of the object corresponds to a pre-stored pattern).
Sugita 1, as modified, does not explicitly disclose the second processor transmits, to the first processor, door opening control inhibition information indicating control contents for inhibiting execution of the door opening and closing control of the door of the vehicle, when the gesture detected is the gesture of covering a face with a hand.
Salter 1, in the same field of endeavor (vehicle access/security systems), discloses transmitting door opening control inhibition information indicating control contents for inhibiting execution of door opening and closing control of a door of a vehicle, when a gesture detected is a gesture of covering a face with a hand (Salter 1; para. 86, ll. 1-15: the individual 125 may indicate to the vehicle entry authorization system 116 that he/she does not intend to enter the vehicle 115. The indication may be provided in various forms such as, for example, by a spoken command (“No!”, for example), a gesture (a head shake or a wave of a hand); the gesture of waving a hand in front of the face would be recognized by one of ordinary skill in the art to correspond to an indication of “no” in Japan, see ‘Japanese Gestures and Body Language You Need to Know’ https://www.japanesepod101.com/blog/2019/08/16/japanese-body-gestures/).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the authentication ECU of Sugita 1, as modified, to inhibit door control in response to a “no” gesture, as disclosed by Salter 1, with the motivation of avoiding inadvertently unlocking or opening the doors of the vehicle when a driver does not intend to enter the vehicle thereby increasing convenience and security (Salter 1; para. 1, ll. 8-11).
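For illustration only (not part of any cited reference; the gesture and control names are hypothetical), the inhibition behavior described above, in which certain gestures suppress the door opening control rather than trigger it, may be sketched as:

```python
# Hypothetical "no"-gesture check in the spirit of Salter 1 (illustrative only).

INHIBIT_GESTURES = {"cover_face_with_hand", "head_shake", "hand_wave"}

def door_control_message(gesture, pending_control="open_driver_door"):
    """Return the control contents to transmit for a detected gesture."""
    if gesture in INHIBIT_GESTURES:
        return "inhibit_door_opening_control"
    return pending_control
```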
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sugita 1 in view of Jones, Chauhan and Park as applied to claim 1 above, and further in view of Van Wiemeersch et al. (US 2023/0174018), hereinafter Van Wiemeersch.
Regarding claim 6, as best understood, Sugita 1, as modified, discloses the second processor transmits, to the first processor, the first door control contents information in association with the predetermined motion of the registered user (Jones; para. 27: The still image comparator module 34 compares images captured from the cameras 20, 22 with the stored still image identifier. In a similar fashion, the moving image comparator module 36 compares moving images captured from the cameras 20, 22 with the stored moving image identifier. In the event that there is correspondence between the still image and the stored still image, and between the stored moving image and captured moving image, an authorisation signal 41 is output to a door lock control module 40 to control activation of a door lock 42 to permit user access to the vehicle.) located at a position close to a driver's seat of the vehicle (Jones; para. 41: Front and rear cameras 20, 22 may be fitted on each side of the vehicle so that access to the vehicle may be via either the driver-side door or the passenger-side door. Alternatively, the cameras may be fitted only on the driver-side door with access via the passenger door being permitted only by means of conventional entry; para. 37: cameras 20, 22 may be angled so as to capture an image of a user who is standing right next to the vehicle, and in which case only a single camera may be required).
Sugita 1, as modified, does not explicitly disclose the registered user is at a position closest to a driver's seat of the vehicle when detecting a plurality of registered users including the registered user and a predetermined motion by each of the plurality of registered users.
Van Wiemeersch, in the same field of endeavor (vehicle access controls), discloses opening a door when a registered user is at a position closest to a driver's seat of a vehicle (Van Wiemeersch; fig. 2B: authorized user 60 enters door detection zone 52A) when detecting a plurality of registered users including the registered user and a predetermined motion by each of the plurality of registered users (Van Wiemeersch; para. 60: If authorized faces are detected within a door detection zone, routine 100 proceeds to step 130 to open the door closure for the authorized user using a face recognition or recognition gait signature for the user at the corresponding door detection boundary zone).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the control of the authentication ECU of Sugita 1, as modified, which opens the driver’s door in response to a gesture of an authorized user near the driver’s door, to open the driver’s door for the closest registered user when other registered users are also detected, as disclosed by Van Wiemeersch, to yield the predictable result of allowing the driver to enter the vehicle.
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sugita 1 in view of Jones, Chauhan and Park as applied to claim 1 above, and further in view of Salter et al. (US 2017/0044816) and Elie et al. (US 2017/0032599), hereinafter Salter 2 and Elie, respectively.
Regarding claim 7, as best understood, Sugita 1, as modified, discloses a plurality of door control contents information are recorded in the memory in advance (Jones; para. 38: if the user approaches and raises a single finger as a gesture, this may be linked to the opening of one of the vehicle doors, whereas the user raising two fingers, or a whole hand, may be linked to the opening of two or more vehicle doors. In this case the moving images are captured and compared with two or more stored user identifiers, each of which corresponds to a different door unlock sequence).
Sugita 1, as modified, does not explicitly disclose the second processor detects a get-off person who has gotten off the vehicle based on the vehicle outside image, detects a predetermined motion of the get-off person, and transmits, to the first processor, fourth door control contents information indicating control contents related to closing the door of the vehicle, the fourth door control contents information is recorded in the memory in advance in association with the predetermined motion, by the get-off person, detected.
Salter 2, in the same field of endeavor (vehicle access/security systems), discloses a processor detects a get-off person who has gotten off a vehicle based on a vehicle outside image (Salter 2, para. 24: determine if a person such as a driver or front seated passenger has exited the vehicle and if the person is moving rearward toward the back of the vehicle. A person may be detected exiting the vehicle by sensing an occupant departing the seating area with one or more occupant detection sensors and detecting an occupant stepping outside of the vehicle by detecting the occupant with the front door proximity sensors 30A and 30B or 30J and 30K with the corresponding side door in the open position).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the authentication ECU of Sugita 1, as modified, to recognize a person who has exited the vehicle, as disclosed by Salter 2, to yield the predictable result of calculating the vehicle’s occupancy.
Sugita 1, as modified, does not explicitly disclose detecting a predetermined motion of the get-off person, and transmitting, to the first processor, fourth door control contents information indicating control contents related to closing the door of the vehicle, the fourth door control contents information is recorded in the memory in advance in association with the predetermined motion, by the get-off person, detected
Elie, in the same field of endeavor (vehicle access/security systems), discloses a processor detects a predetermined motion of a person and transmits door control information indicating control contents related to closing a door of a vehicle, the door control contents information is recorded in a memory in advance in association with the predetermined motion, by the person, detected (Elie; para. 73: the controller 70 is configured to activate the door assist system 12 such that the door 14 opens, closes, or is repositioned in accordance with the particular gesture identified).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified, after detecting a person who has exited the vehicle, the door authentication ECU of Sugita 1, as modified, to close a door in response to a gesture of the person, as disclosed by Elie, with the motivation of assisting a user when accessing the vehicle thereby improving operation of a vehicle (Elie; para. 2).
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sugita 1 in view of Nagata and Sugita (US 2017/0267211), hereinafter Sugita 2.
Regarding claim 9, as best understood, Sugita 1 discloses a control system which is a vehicle control system (Sugita 1; para. 22: a vehicle door opening/closing device for controlling opening and closing of a vehicle door is described as an example of a vehicle control system), comprising: a zone A-ECU (Sugita 1; fig. 1: door control ECU 14) including a first processor (Sugita 1; para. 25: door control ECU 14 is similarly configured by a microcomputer including a CPU 14) for performing opening and closing control of a door of a vehicle (Sugita 1; para. 31: actuator 24 opens and closes a vehicle door 30 (see FIG. 3) at a vehicle rear section under the control of the door control ECU 14); and an entry ECU (Sugita 1; fig. 1: authentication ECU 12) including a second processor and a memory (Sugita 1; para. 24: authentication ECU 12 is configured by a microcomputer including a central processing unit (CPU) 12A, read only memory (ROM) 12B, random access memory (RAM) 12C, and so on).
Sugita 1 does not explicitly disclose the second processor acquires a vehicle outside image which is an image captured by a camera that captures an image outside the vehicle, detects a vehicle outside person present outside the vehicle based on the vehicle outside image acquired, detects a gesture, to the vehicle, of the vehicle outside person detected, and authenticates, based on a predetermined part of the vehicle outside person detected, whether or not the vehicle outside person detected is a registered user who is a user of the vehicle registered in advance, the first processor, when the gesture detected by the second processor is the gesture of the vehicle outside person authenticated as the registered user, performs a door opening control in a first mode based on the gesture detected, when the second processor detects a plurality of vehicle outside users including a non-registered user who is a user of the vehicle not registered as the registered user and the registered user, performs the door opening control in a second mode different from when only the registered user is detected, and when the gesture of the non-registered user is detected after performing the door opening control based on the gesture of the vehicle outside person authenticated as the registered user, performs a door opening control based on the gesture of the non-registered user, wherein the first mode includes one of unlocking all doors or unlocking only certain doors, the second mode includes one of unlocking all doors or unlocking only certain doors.
Nagata discloses a processor acquires a vehicle outside image (Nagata; col. 12, ll. 55-57: sensors 334 provide data to vehicle access control system 340, which uses this information to determine whether individuals are approaching) which is an image captured by a camera that captures an image outside the vehicle (Nagata; col. 11, ll. 25-27: a side mounted image sensor 64 used to detect approaching individuals and to capture images of detected approaching individuals), detects a vehicle outside person present outside the vehicle based on the vehicle outside image acquired (Nagata; col. 8, ll. 12-15: at operation 212 the system detects individuals approaching the subject vehicle. For example, one or more of a variety of sensors can be used to detect approaching individuals), detects a gesture, to the vehicle, of the vehicle outside person detected (Nagata; col. 9, ll. 51-53: the system can employ gesture recognition to enable recognized individuals to communicate with the vehicle using various gestures), and authenticates, based on a predetermined part of the vehicle outside person detected, whether or not the vehicle outside person detected is a registered user (Nagata; col. 8, ll. 15-20: At operation 214, images of the approaching individuals may be collected, and the system can use facial recognition or other image-analysis-based recognition techniques to determine whether the system recognizes one or more of the approaching individuals.) who is a user of the vehicle registered in advance (Nagata; col. 8, ll. 29-32: authorized persons may register using a smart phone app, website, or other like access application to register as authorized operators or passengers), the processor when the gesture detected is the gesture of the vehicle outside person authenticated as the registered user, performs a door opening control in a first mode based on the gesture detected (Nagata; col. 4, ll. 18-22: When an individual operator approaches the vehicle and that individual is recognized by the vehicle as an authorized driver for that vehicle, the vehicle might be configured to only unlock and open the driver's door to allow driver access; col. 9, ll. 48-53: the system may be configured to also recognize other signals provided by recognized individuals to confirm or override the system's access decision. In some implementations, the system can employ gesture recognition to enable recognized individuals to communicate with the vehicle using various gestures), when the processor detects a plurality of vehicle outside users including a non-registered user who is a user of the vehicle not registered as the registered user and the registered user, performs the door opening control in a second mode different from when only the registered user is detected (Nagata; col. 8, ll. 43-47: Where the system does not recognize all of the approaching individuals, the system can perform further analysis on the approaching group to determine whether it should unlock the vehicle and, if so, to what extent it should unlock the vehicle.), wherein the first mode includes one of unlocking all doors or unlocking only certain doors (Nagata; col. 4, ll. 18-22: When an individual operator approaches the vehicle and that individual is recognized by the vehicle as an authorized driver for that vehicle, the vehicle might be configured to only unlock and open the driver's door to allow driver access.), the second mode includes one of unlocking all doors (Nagata; col. 4, ll. 36-45: image analysis might be used to evaluate the interaction between approaching individuals to determine whether accommodation should be made for otherwise unrecognized individuals. For example, the vehicle may detect a known operator approaching the vehicle with 3 other adults. The adults may be talking and chatting with one another in a collegial manner (as opposed to ignoring one another), in which case the vehicle may interpret that these 4 adults are about to get into the vehicle together and that all 4 doors should be unlocked and opened) or unlocking only certain doors.
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the authentication ECU of Sugita 1 to transmit door control signals for one or more doors, in response to authenticating, based on exterior camera images, both the motion (e.g., a gesture) and face of one or more approaching people, as disclosed by Nagata, and to have further modified the door opening and closing control of the door control ECU of Sugita 1 to be performed in response to receiving the door control signals, as further disclosed by Nagata, with the motivation of providing enhanced security for vehicles and their operators (Nagata; col. 1, ll. 36-38).
Sugita 1, as modified, does not explicitly disclose when the gesture of the non-registered user is detected after performing the door opening control based on the gesture of the vehicle outside person authenticated as the registered user, performing a door opening control based on the gesture of the non-registered user.
Sugita 2, in the same field of endeavor (vehicle entry systems), discloses when a gesture of a passenger (Sugita 2; para. 35: a person other than the operator (who carries the portable device 11), such as a passenger, performs a passenger's door unlock operation; para. 84: a passenger's door unlock operation is detected using the touch sensor or the mechanical switch) is detected after performing door opening control based on a gesture of a driver (Sugita 2; para. 35: the timer unit 45 measures time when a person other than the operator (who carries the portable device 11), such as a passenger, performs a passenger's door unlock operation. After the timer unit 45 starts measuring time, in response to detection of a driver's door unlock operation performed by the operator (when the driver's door unlock sensor 31a detects a driver's door unlock operation and it is determined that communication with the portable device 11 has been established), the smart control unit 33 transmits the result of time measurement performed by the timer unit 45 to the centralized control unit), performing a door opening control based on a gesture of the passenger (Sugita 2; para. 43: when the result of time measurement performed by the timer unit 45 is within a certain period of time (when a driver's door unlock operation is performed while measuring a certain period of time), the centralized control unit 39 unlocks all the door locks; para. 80: the operator performs a driver's door unlock operation to unlock the door, and the driver's door is actually opened. At this time point, the passengers' doors may be unlocked. In this case, the time point at which the driver's door is opened may be within a certain period of time, or, like the above-described embodiment, a driver's door unlock operation may be within the certain period of time).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the authentication ECU of Sugita 1, as modified, to open the passenger’s door, in response to the passenger’s gesture, after the driver’s door is opened, as disclosed by Sugita 2, with the motivation of alleviating the operation load on the operator thereby reducing the waiting time until the driver's door and the passengers' doors are unlocked (Sugita 2; para. 14).
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sugita 1 in view of Van Wiemeersch and Hsieh (US 9,079,749).
Regarding claim 10, as best understood, Sugita 1 discloses a control system which is a vehicle control system (Sugita 1; para. 22: a vehicle door opening/closing device for controlling opening and closing of a vehicle door is described as an example of a vehicle control system), comprising: a zone A-ECU (Sugita 1; fig. 1: door control ECU 14) including a first processor (Sugita 1; para. 25: door control ECU 14 is similarly configured by a microcomputer including a CPU 14) for performing opening and closing control of a door of a vehicle (Sugita 1; para. 31: actuator 24 opens and closes a vehicle door 30 (see FIG. 3) at a vehicle rear section under the control of the door control ECU 14); and an entry ECU (Sugita 1; fig. 1: authentication ECU 12) including a second processor and a memory (Sugita 1; para. 24: authentication ECU 12 is configured by a microcomputer including a central processing unit (CPU) 12A, read only memory (ROM) 12B, random access memory (RAM) 12C, and so on).
Sugita 1 does not explicitly disclose the second processor acquires a vehicle outside image which is an image captured by a camera that captures an image outside the vehicle, detects a vehicle outside person present outside the vehicle based on the vehicle outside image acquired, detects gestures, to the vehicle, of the vehicle outside persons detected, and authenticates, based on a predetermined part of the vehicle outside persons detected, whether or not the vehicle outside persons detected are registered users who are a user of the vehicle registered in advance, the first processor when the second processor detects the plurality of registered users and the plurality of gestures made by the plurality of registered users substantially simultaneously, performs the door opening control based on the gesture of the registered user located at a position closest to a driver's seat of the vehicle.
Van Wiemeersch discloses a processor (Van Wiemeersch; fig. 4: microprocessor 82) acquires a vehicle outside image which is an image captured by a camera that captures an image outside a vehicle, detects vehicle outside persons present outside the vehicle based on the vehicle outside image acquired (Van Wiemeersch; para. 50: Each of the cameras 48A-48G may acquire images of zones in the space around the perimeter of the vehicle 10, particularly covering the door detection regions for the powered doors. The acquired images may be processed by a controller using video processing to identify objects such as one or more people as potential users and the position of the people relative to the vehicle 10 and the powered doors 22.), detects gestures, to the vehicle, of the vehicle outside persons detected (Van Wiemeersch; para. 60: If the user is still within the approach detection zone, routine 100 proceeds to step 116 to determine if the authorized mobile device has been within the approach detection zone for the time period of greater than 30 seconds and, if so, proceeds to step 118 to enter the approach detection zone welcome mode and the door detection zone expires and at step 120 where the closure only occurs with a manual actuation, before ending at step 190.; para. 59: step 120 where the closure will only power open on a manual handle grab or the closure panel switch press touch, or other activations such as a touch button on a phone, a smart device, a gesture in front of the door, etc.), and authenticates, based on a predetermined part of the vehicle outside persons detected, whether or not the vehicle outside persons detected are registered users who are a user of the vehicle registered in advance (Van Wiemeersch; para. 60: At decision step 124, routine 100 determines if the face or faces of one or more authorized users is viewable by the imaging cameras. If the face or faces are viewable, routine 100 proceeds to step 126 to track the facial characteristics of the potential authorized users, and then to decision step 128 to determine if the detected faces of the potential authorized users are located within a door detection zone. If authorized faces are detected within a door detection zone, routine 100 proceeds to step 130 to open the door closure for the authorized user using a face recognition or recognition gait signature for the user at the corresponding door detection boundary zone before ending at step 190.), and performs door opening control based on the gesture of the registered user located at a position closest to a driver's seat of the vehicle (Van Wiemeersch; fig. 2B: authorized user 60 enters door detection zone 52A).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the authentication ECU of Sugita 1 to transmit door control signals for one or more doors, in response to authenticating, based on exterior camera images, one or more approaching users, as disclosed by Van Wiemeersch, and to have further modified the door opening and closing control of the door control ECU of Sugita 1 to be performed in response to receiving the door control signals, as further disclosed by Van Wiemeersch, to yield the predictable result of allowing one or more passengers to enter the vehicle.
Sugita 1, as modified, does not explicitly disclose the processor detects the plurality of registered users and the plurality of gestures made by the plurality of registered users substantially simultaneously.
Hsieh, in a reasonably pertinent field of endeavor (node transportation system vehicle controls), discloses a processor detects a plurality of users and a plurality of gestures made by the plurality of users substantially simultaneously (Hsieh; col. 4, ll. 39-40: FIG. 18 shows a first user and a second user inputting two gesture directions simultaneously.).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the authentication ECU of Sugita 1 to respond to simultaneous user gestures, as disclosed by Hsieh, to yield the predictable result of opening a door for more than one person accessing the vehicle at the same time.
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sugita 1 in view of Jones, Chauhan and Park as applied to claim 1 above, and further in view of Kesevan et al. (US 2014/0058583), hereinafter Kesevan.
Regarding claim 11, as best understood, Sugita 1, as modified, discloses the first processor receives data from a Controller Area Network (CAN) bus (Sugita 1; authentication ECU 12 and the door control ECU 14 are both connected to an onboard network 16 such as a controller area network (CAN)), and can detect the predetermined motion before the second authentication is completed (Park; para. 101: When the user authentication has failed based on the exterior image acquired with the built-in cam BC in S402, in S406, the controller 140 may request user authentication retry.; paras. 106-108: controller 140 may activate the face recognition camera FC when the user authentication has failed … the controller 140 may perform the user authentication by determining the gesture of the object ob and determining whether the gesture of the object corresponds to the pre-stored pattern); and when the second authentication is completed successfully, the first processor performs the opening and closing control of the door of the vehicle based on the door control contents information (Jones; para. 32: the image data relating to the user's gait is passed to the moving image comparator module 36. The moving comparator module 36 retrieves the pre-recorded moving image identifier for the authorised user from the memory 38 and compares this with the captured moving image using an image recognition comparison algorithm. If there is a correspondence, the moving image comparator module 36 sends a signal 41 to the door lock control module 40 to indicate that the moving image matches the authorised moving image identifier).
Sugita 1, as modified, does not appear to explicitly disclose the first processor temporarily stores door control contents information corresponding to the predetermined motion, when the second authentication is completed successfully and performs the opening and closing control of the door of the vehicle based on the door control contents information stored; and when the second authentication is not completed successfully, the first processor discards the door control contents information stored.
Kesevan, in a reasonably pertinent field of endeavor (CAN data recorders), discloses temporarily storing and discarding CAN bus information (Kesevan; para. 22: peripheral devices 18 have additional memory or buffer 54 to store data. The secure memory 54 can be a circular buffer which gets rewritten. The secure memory retrieved data prior to any unusual events and for some period afterwards) that includes door information (Kesevan; para. 11: Microcontrollers in electronic control units in existing vehicles communicate with each other through a controller area network (CAN) bus. All messages pertaining to the vehicle, such as door status, engine, and alarm status are carried by this bus.).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the door control ECU of Sugita 1, as modified, to temporarily store, in a circular buffer, the door control contents information received from the vehicle's CAN bus before the second authentication is completed, as disclosed by Kesevan, to yield the predictable result of storing the data until the processor is ready to use it and ensuring there is room to store future data.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH THOMPSON whose telephone number is (571)272-3660. The examiner can normally be reached Mon-Thurs 9:00AM-3:00PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Bishop can be reached on (571)270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Joseph Thompson/Examiner, Art Unit 3665
/Erin D Bishop/Supervisory Patent Examiner, Art Unit 3665