DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/09/2025 has been entered.
Status of Claims
This Office action is in response to applicant’s amendments to application number 18/312,876, filed on 12/09/2025, in which Claims 1-20 are presented for examination. The applicant amends Claims 1-2, 12-13, and 20 and cancels Claims 4 and 15.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 5/5/2023 has been received and considered.
Response to Arguments
Applicant’s arguments, see pgs. , filed 12/09/2025, with respect to the drawings and claim objections have been fully considered and are persuasive. The drawings and claim objections set forth in the Office action of 9/10/2025 have been withdrawn.
Applicant’s arguments, see pgs. , filed 12/09/2025, with respect to the rejections of Claims 1- under 35 U.S.C. 103 have been considered but are moot because they are directed towards the amendments in Claims 1 and 12.
Applicant argues that the cited prior art, Eathakota, does not teach “detecting of the exit of the occupant,” and instead only teaches a user providing input to identify an exit from the vehicle. The broadest reasonable interpretation of “detecting” can include any form of the vehicle system identifying an exit of an occupant, including receiving a signal, such as a user input, to determine an exit. For example, application Claim 14 recites “determining that the occupant exited the vehicle,” compared with similar Claim 3, which recites “detecting that the occupant exited the vehicle.” However, in light of the amendments to Claims 1 and 12, which remove the “at least one of” language for the detected conditions and instead require “detecting an exit of an occupant,” Examiner provides an updated rejection under 35 U.S.C. 103 with a more explicit teaching of “detecting an exit of an occupant” as used for determining vehicle actions or behavior.
Applicant argues that Eathakota further does not teach the amended language for automatically detecting “by the identification recognition unit.” Examiner respectfully disagrees. The claim language does not recite, or further define, “automatically” detecting. Therefore, the broadest reasonable interpretation, as outlined above, is maintained. Further, in view of In re Kuhle, 526 F.2d 553, 188 USPQ 7 (CCPA 1975), recitation of a specific module for performing all of the recited method steps, recitation of more than one module for performing individual method steps, or a combination thereof, is simply a rearrangement of parts, or a design choice. The steps of receiving, acquiring, controlling, detecting, and activating could be performed by the “identification recognition unit” or any combination of units, and would be technically feasible with a reasonable expectation of success, where the individual steps would be expected to work as intended.
In light of the amendments, Examiner provides an updated rejection for Claims 1-3, 5-14, and 16-20 under 35 U.S.C. 103. Further details are provided below.
Claim Objections
Claim 18 is objected to because of the following informalities:
Claim 18 (line 2): "system to::" should be "system to:".
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 and 6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 6 recites the limitation "the unauthorized attempt." There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-6, 11-14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ito et al., PG Pub US-2021/0026345-A1 (herein "Ito"), in view of Schwie et al., PG Pub US-2020/0301414-A1 (herein "Schwie"), Eathakota, PG Pub US-2021/0245708-A1 (herein "Eathakota"), and Gordon et al., Patent No. US-10,363,893-B2 (herein “Gordon”).
Regarding Claim 1, Ito discloses: A method comprising: receiving identification data of an object outside of a vehicle from an identification recognition unit. See [Ito, FIG. 9 and pg. 5, paras 0083-0084], which explain that a parent, or other user, inputs their data and a request for pickup of another person, or object, “[…] the portable terminal 14 in the possession of the user receives operation from the user. […] the user starts up an application preinstalled in the portable terminal 14, inputs personal information […], and attaches an image of their face. The user then presses a register button […] to complete receipt of the operation. [… and] transmits the personal information […] to the processing server 18.”
Ito further discloses: acquiring authorization data associated with an occupant in the vehicle. See [Ito, FIG. 11 and pg. 7, paras 0110-0114 and 0116], which explains that the occupant is authorized based on matching identification codes, “[0110] […] The user in the possession of the portable terminal 14 and the person outside the vehicle 12 are thereby able to converse. [0111][…] the CPU 20A acquires an authentication code from the unlocking terminal 15 through the communication I/F 20E. [0112] […] the CPU 20A performs a comparison to determine whether or not the authentication code acquired from the unlocking terminal 15 matches the unique authentication code of the vehicle 12 that is pre-stored in the storage 20D […]. […] the CPU 20A determines that the acquired authentication code matches the unique authentication code of the vehicle 12. [… or] the CPU 20A determines that the acquired authentication code does not match the unique authentication code of the vehicle 12. [0113] […] the CPU 20A operates the door locking device 36 to unlock the side door 82. [… which] be opened and closed to allow a passenger to board […]. [0114] […] the CPU 20A issues an error report indicating that authentication has failed. […] such as "The authentication code does not match" on the second display 32B, and outputs an alarm sound through the speaker 32C. […] [0116] […] the CPU 20A transmits information indicating that unlocking was unsuccessful to the portable terminal 14. The user in the possession of the portable terminal 14 is thereby made aware that passenger boarding […] was not possible.”
Ito further discloses: controlling a locking mechanism of a door of the vehicle based on an authorization of the object, wherein the authorization of the object is based on the identification data and the authorization data, wherein the controlling comprises unlocking the door when the object is authorized, and locking or keeping the door locked based on the object being unauthorized. See again [Ito, FIG. 9 and pg. 5, paras 0083-0084], which explains that a parent, or other user, inputs their data and a request for pickup of another person, or object. See also [Ito, FIG. 12 and pg. 8, paras 0126, 0129-0130, and 132], which explains the verification of the user based on the previously provided image, associated with the pick-up, or drop-off, request, “[126] The authentication section 280 […] determines authentication to be successful in cases in which a facial image of the person outside the vehicle acquired in advance from the unlocking terminal 15 matches an image captured by the camera 24A. […] [0129] […] the CPU 20A captures an image of a person outside the vehicle near to the side door 82 using the camera 24A. [0130] [… and] performs a comparison to determine whether or not the facial image acquired from the unlocking terminal 15 matches the facial image included in the captured image as a result of authentication processing. […] the CPU 20A determines that the acquired facial image matches the captured facial image. [… or] determines that the acquired facial image does not match the captured facial image. […] [0132] […] the present exemplary embodiment performs unlocking based on facial authentication […]. Also see again [Ito, FIG. 11 and pg. 7, paras 0112-0113 and 0116], which explains that the door is unlocked when the user is authorized based on matching identification codes. Also see [Ito, pg. 
1, paras 0013-0014], which explains the locking and unlocking based on authentication, “[0013] […] a locking section configured to lock and unlock the door section, and an authentication section configured to perform authentication to cause the locking section to perform unlocking based on information from outside the vehicle. [0014] […] the door section locked by the locking section is configured capable of being unlocked on the basis of authentication by the authentication section.” Also see [Ito, pg. 3, para 0050], which explains that the door locking device can lock and unlock the side door, "The door locking device 36 serves as a locking section, and is capable of locking and unlocking the side door 82, serving as a door section. The door locking device 36 includes a locking mechanism for locking the side door 82 and an actuator to drive the locking mechanism. Being "locked" by the door locking device 36 refers to a state in which the side door 82 is locked, and being "unlocked" by the door locking device 36 refers to a state in which the side door 82 is unlocked." Also see [Ito, pg. 4, para 0060], which further explains the authentication steps and the resulting unlocking, “The authentication section 280 includes functionality to perform authentication to unlock the side door 82. In the present exemplary embodiment, the authentication section 280 determines authentication to be successful in cases in which an authentication code acquired from the unlocking terminal 15 matches the authentication code stored in the ROM 20B or the storage 20D. The authentication section 280 operates the door locking device 36 to unlock the side door 82 in cases in which authentication has been determined to be successful. This enables a person outside the vehicle to open and close the side door 82.” Finally see [Ito, pg. 
7, para 0123], which explains that the door is locked prior to being unlocked in response to authentication, "In the vehicle 12 of the present exemplary embodiment, the side door 82 locked by the door locking device 36 can be unlocked by performing authentication with the authentication section 280. The present exemplary embodiment thereby enables boarding of a passenger unconnected to the user or loading of a package unconnected to the user to be prevented."
Ito does not disclose: detecting, by the identification recognition unit, an exit of an occupant of the vehicle; and activating a security response, wherein the security response comprises disabling a security feature of the vehicle in response to the detecting of the exit of the occupant.
However, Schwie teaches: detecting, by the identification recognition unit […]. See [Schwie, pg. 2, para 0037], which explains that the vehicle management system determines that the rider has exited the vehicle using imaging, “In some embodiments, the vehicle management system is configured to determine that the first rider has exited the vehicle. The vehicle management system can be configured to cause the camera system to take a first interior image of the interior of the vehicle in response to determining that the first rider has exited the vehicle.” See also [Schwie, pg. 3, para 0040], which further explains that the vehicle management system can use location, wireless communication status, and imaging to determine that a rider has exited or entered the vehicle, “In some embodiments, the vehicle management system is configured to determine that the first rider has exited the vehicle in response to (1) receiving a location of a remote computing device associated with the first rider and determining that the location is not inside the vehicle, (2) failing to detect a direct wireless communication from the remote computing device to an antenna of the vehicle, (3) determining, by the image analysis system, that a second interior image does not show the first rider, and/or (4) determining, by the image analysis system, that an infrared image of the interior of the vehicle does not show the first rider.”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Ito with Schwie to include using the vehicle system to detect the rider, or occupant, exiting the vehicle. Doing so allows the self-driving vehicle to be self-reliant for performing subsequent actions based on specific events [Schwie, pg. 1, para 0012], such as occupant behavior and needs [Schwie, pg. 1, para 0013]. Additionally, it allows the self-driving vehicle to determine vehicle availability to pick up a second rider, or occupant [Schwie, pg. 16, para 0245], or perform a subsequent behavior [Schwie, pg. 17, para 0254].
However, Eathakota teaches: […] activating a security response, wherein the security response comprises disabling a security feature of the vehicle in response to the […] of the exit of the occupant. See [Eathakota, pg. 3, para 0040], which explains that the user confirms they have exited the vehicle, “Once the temporary access has expired the key or fob is returned to the keybox 120, and the user exits the vehicle. The user then confirms that the vehicle is empty using the software application 40 and the vehicle autonomously navigates back to the car lot in a "Return Vehicle to Lot" step 250.” See also [Eathakota, pg. 3, Claim 10], which summarizes that the temporary access is ended and the vehicle is returned, “10. The method of claim 1, wherein transporting the vehicle from the user to the storage space subsequent to the user's access comprises providing the user with a confirmation that the user is ending the temporary access.” See also [Eathakota, Abstract], which further explains that the smartlock features are turned on with the temporary access, or off when it ends as stated above, and provides the user access to the system features, “A method for providing temporary access to a vehicle includes receiving a vehicle access request at a vehicle, transporting the vehicle from a storage space to a user originating the vehicle access request using the at least one of the semi-autonomous mode and the fully autonomous mode, providing the user access to the vehicle via a smartlock system, and transporting the vehicle from the user to the storage space subsequent to the user's access.” Finally see [Eathakota, pg. 3, para 0038-0043], which explains these features can include restrictions, such as geographic or time limitations, “[0038] […] the central server 30 schedules the temporary access and transmits the temporary access details to the smartlock system […]. 
Included within the scheduled temporary access is the geographic location where the user expects the vehicle 12 to arrive, and the time of arrival. […] [0039] Once the vehicle 12 arrives at the user’s location, a multi-factor authentication is provided to the user […]. If the user passes the multi factor authentication, the keybox 120 in the smartlock system opens, and the user is granted access to the key or fob. […] [0041] In some examples, the geographic range or route of the vehicle can be limited during the temporary access. […] [0042] In some examples, the limited geographic zone is stored within a local memory 104 of the smartlock system 16. […or] can be stored remotely […] and the smartlock system 16 retrieves the limited geographic zone parameters through the input/output device 130. [0043] [… or] the smartlock system 16 interfaces with the controller 14 of the vehicle 12 via the communication system 110. The interfacing allows the software 40 communicate with the controller 14 and act in place of the key or fob.”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified Ito with Eathakota to include detecting that the occupant has exited the vehicle and subsequently disabling security features. Doing so provides the user with access appropriate for their temporary usage, but allows any security features to be turned off when the vehicle is returned, which is ideal for test drives and rental cars [Eathakota, pg. 1, para 0003] that are stored at other locations [Eathakota, pg. 1, para 0004] and used by multiple users.
Regarding Claim 2, Ito as modified discloses the limitations of Claim 1.
Ito further discloses: receiving an input of a destination location. See [Ito, pg. 3, para 0055], which explains that the travel plan includes a pre-set destination, “The travel plan generation section 230 includes functionality to generate a travel plan to cause the vehicle 12 to travel based on the position information acquired by the position acquisition section 200, the peripheral information acquired by the peripheral information acquisition section 210, and the vehicle information acquired by the vehicle information acquisition section 220. The travel plan includes not only a travel route to a pre-set destination […].”
Ito further discloses: acquiring a location of the vehicle. See [Ito, FIG.’s 2 and 4 and pg. 3, para 0043], which explains that the GPS device measures the vehicle location, “The GPS device 22 is a device for measuring the current position of the vehicle 12. The GPS device 22 includes an antenna to receive signals from GPS satellites,” and [Ito, pg. 3, para 0052], which further explains that the position acquisition section acquires the vehicle location from the GPS, “The position acquisition section 200 includes functionality to acquire the current position of the vehicle 12. The position acquisition section 200 acquires position information from the GPS device 22 through the input/output I/F 20F.”
Ito further discloses: determining that the location of the vehicle matches the destination location of the vehicle; and unlocking the door of the vehicle in response to determining that the location of the vehicle matches the destination location of the vehicle. See [Ito, pg. 6, para 0097], which explains that after vehicle operation, the vehicle controller device determines that the vehicle has arrived at its destination and proceeds with next steps, “the CPU 20A of the vehicle 12 determines whether or not the vehicle 12 has arrived at the designated dispatch destination. Processing proceeds to the next step S31 in cases in which the vehicle has arrived at the dispatch destination. Note that the CPU 20A may determine the vehicle 12 to have arrived when the vehicle 12 has come within a predetermined range of the dispatch destination.” See also [Ito, FIG. 10] which identifies the next steps (S31 and S34) as the vehicle “presentation and unlocking process.”
Regarding Claim 3, Ito as modified discloses the limitations of Claim 1.
Ito does not disclose: driving the vehicle to a base location in response to the detecting that the occupant exited the vehicle.
However, Schwie teaches: driving the vehicle […] in response to detecting that the occupant exited the vehicle. See again [Schwie, pg. 2, para 0037], which explains that the vehicle management system determines that the rider has exited the vehicle using imaging and [Schwie, pg. 3, para 0040], which further explains that the vehicle management system can use location, wireless communication status, and imaging to determine that a rider has exited or entered the vehicle. See also [Schwie, pg. 2, para 0038], which explains that after an occupant has exited the vehicle, the imaging system may also determine that the vehicle needs cleaning and the self-driving vehicle will drive to a cleaning facility, “In some embodiments, the maintenance system further comprises an image analysis system having at least one processor and a memory comprising program instructions (e.g., code modules configured to be executed by one or more computers) that when executed by the at least one processor are configured to cause the image analysis system to detect the item left behind by analyzing the first interior image taken by the camera system after the first rider has exited the vehicle. The first location can be a vehicle cleaning facility. The vehicle management system can be configured to drive the vehicle to the vehicle cleaning facility to remove the item in response to the image analysis system detecting the item.” Finally see [Schwie, pg. 17, para 0254], which further explains that the vehicle management system can cause various vehicle behaviors including parking the vehicle, idling the vehicle, or driving the vehicle to a user’s home, “Some embodiments include methods of using the vehicle management system 65 to operate the self-driving vehicle 2. 
The vehicle management system 65 is configured to be communicatively coupled with a remote computing device 12, which is configured to operate software, such as an iPhone application or an Android application adapted to enable a user to control behaviors of the self-driving vehicle 2. Behaviors can include actions and non-actions of the self-driving vehicle 2, such as picking up the user at a location, picking up the user at a time based on a schedule of the user or a time based on past pick-up times, remaining idle, driving to a residence of the user, pulling out of a garage, parking the vehicle, getting gas, charging the vehicle, and the like.”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Ito with Schwie to include using the vehicle system to detect the rider, or occupant, exiting the vehicle and drive the vehicle in response. Doing so allows the self-driving vehicle to be self-reliant for performing subsequent actions based on specific events [Schwie, pg. 1, para 0012], such as occupant behavior and needs [Schwie, pg. 1, para 0013]. Additionally, it allows the self-driving vehicle to determine vehicle availability to pick up a second rider, or occupant [Schwie, pg. 16, para 0245], or perform a subsequent behavior, such as sending the vehicle home [Schwie, pg. 17, para 0254] or for cleaning the vehicle [Schwie, pg. 2, para 0038].
However, Eathakota teaches: driving the vehicle to a base location in response to […] the occupant exited the vehicle. See again [Eathakota, pg. 3, para 0040], which explains that when the user confirms they have exited the vehicle, temporary access is ended, and the vehicle is returned to the vehicle lot. See also [Eathakota, pg. 3, Claim 10], which summarizes that the temporary access is ended and the vehicle is returned.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified Ito with Eathakota to include ensuring that an occupant has exited the vehicle and returning the vehicle to a lot. Doing so provides the user with only temporary access to a vehicle, which is ideal for test drives and rental cars [Eathakota, pg. 1, para 0003] that are stored at other locations, allowing the vehicles to be sent from the storage location to the user [Eathakota, pg. 1, para 0004] temporarily, and returned once the user is done [Eathakota, pg. 1, para 0015].
Regarding Claim 5, Ito as modified discloses the limitations of Claim 1.
Ito further discloses: acquiring identification data of the occupant; and acquiring the authorization data associated with the occupant to the vehicle based on the identification data of the occupant. See again [Ito, FIG. 11 and pg. 7, paras 0110-0114 and 0116], which explains that the occupant is authorized based on matching identification codes, where the occupant must provide the authentication code and the unlocking terminal determines if the authentication matches the stored data.
Regarding Claim 6, Ito as modified discloses the limitations of Claim 1.
Ito further discloses: wherein the detecting of the unauthorized attempt […] by the object is based on the identification data and the authorization data, and wherein the security response comprises providing a notification in response to the detecting of the unauthorized attempt. See again [Ito, FIG. 11 and pg. 7, paras 0112-0114], which explains that the vehicle checks the authentication and provides a notification of a failed authorization.
Ito does not explicitly disclose: attempt to open the door of the vehicle.
However, Gordon teaches: attempt to open the door of the vehicle. See [Gordon, col 12, lines 1-15], which explains that the vehicle can detect the door handle being pulled by an unauthorized person, "sensors on the SDV may determine that an unauthorized person is pulling the handle of a passenger door, attempting to open the trunk, pushing or knocking at the window, utilizing a tool to force a lock open, etc., which may indicate an attempt at theft of the SDV."
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified Ito with Gordon to include detecting an attempt to open the door handle by an unauthorized person. Doing so would allow the vehicle to stop an attempt where the user does not just approach the vehicle, but is attempting to get inside, steal from, or damage the vehicle, and further prevent access of the unauthorized person [Gordon, col 12, lines 1-15].
Regarding Claim 11, Ito as modified discloses the limitations of Claim 1.
Ito further discloses: operating the vehicle to drive autonomously. See [Ito, pg. 1, para 0007], which explains the autonomous capabilities of the vehicle, “The vehicle of the first aspect is capable of implementing autonomous driving and remote driving by the travel control section controlling the travel device. The autonomous driving is travel based on the peripheral information acquired from the peripheral information detection section by the acquisition section and the travel plan generated by the travel plan generation section,” and further [Ito, pg. 2, para 0034], which identifies the vehicle features that enable autonomous driving, “The vehicle 12 is configured so as to be capable of implementing autonomous driving in which the vehicle 12 travels independently by the vehicle controller device 20 based on a pre-generated travel plan, remote driving based on operation of the remote operation station 16 by a remote driver, and manual driving based on operation by an occupant of the vehicle 12 (namely, a driver).”
Regarding Claim 12, Ito discloses: A processing system comprising: a memory comprising computer-executable instructions; and a processor configured to execute the computer-executable instructions and cause the processing system. See [Ito, Abstract], which describes the system components, including a processor and a memory, “A vehicle including: […] and a processor coupled to the memory, the processor being configured to: acquire peripheral information peripheral to a vehicle body from a peripheral information detection section, generate a travel plan based on the peripheral information, and control the travel device so as to perform autonomous driving in which travel is based on the generated travel plan […].” See also [Ito, FIG.'s 2, 5, and 7 and pg. 8, para 0133], which shows the various CPUs, or processors, of the system, “Note that the various processing executed by the CPU 20A, the CPU 40A, and the CPU 60A reading software (programs) in the exemplary embodiments described above may be executed by various processors other than CPUs. Examples of such processors include programmable logic devices (PLDs) such as field-programmable gate arrays (FPGAs) […], or dedicated electrical circuits these being processors such as application specific integrated circuits (ASICs) […]. The various processing may be executed using one of these processors, or may be executed by a combination of two or more processors of the same type or different types to each other […]. A more specific example of a hardware structure of these various processors is electric circuitry combining circuit elements such as semiconductor elements.”
And: receive identification data of an object outside of a vehicle from an identification recognition unit; acquire authorization data associated with an occupant in the vehicle. See again [Ito, FIG. 9 and pg. 5, paras 0083-0084], which explains that a parent, or other user, inputs their data and a request for pickup of another person, or object, and [Ito, FIG. 11 and pg. 7, paras 0110-0114 and 0116], which explains that the occupant is authorized based on matching identification codes.
Ito further discloses: control a locking mechanism of a door of the vehicle based on an authorization of the object, wherein the authorization of the object is based on the identification data and the authorization data, wherein the controlling comprises unlocking the door when the object is authorized, and locking or keeping the door locked based on the object being unauthorized. See again [Ito, FIG. 9 and pg. 5, paras 0083-0084], which explains that a parent, or other user, inputs their data and a request for pickup of another person, or object, and [Ito, FIG. 12 and pg. 8, paras 0126, 0129-0130, and 0132], which explains the verification of the user based on the previously provided image, associated with the pick-up, or drop-off, request. Also see again [Ito, FIG. 11 and pg. 7, paras 0112-0113 and 0116], which explains that the door is unlocked when the user is authorized based on matching identification codes. Also see [Ito, pg. 1, paras 0013-0014], which explains the locking and unlocking based on authentication. Also see [Ito, pg. 3, para 0050], which explains that the door locking device can lock and unlock the side door. Also see [Ito, pg. 4, para 0060], which further explains the authentication steps and the resulting unlocking. Finally see [Ito, pg. 7, para 0123], which explains that the door is locked prior to being unlocked in response to authentication.
Ito does not disclose: detect, by the identification recognition unit, an exit of the occupant.
However, Schwie teaches: detect, by the identification recognition unit, an exit of the occupant. See [Schwie, pg. 2, para 0037], which explains that the vehicle management system determines that the rider has exited the vehicle using imaging, “In some embodiments, the vehicle management system is configured to determine that the first rider has exited the vehicle. The vehicle management system can be configured to cause the camera system to take a first interior image of the interior of the vehicle in response to determining that the first rider has exited the vehicle.” See also [Schwie, pg. 3, para 0040], which further explains that the vehicle management system can use location, wireless communication status, and imaging to determine that a rider has exited or entered the vehicle, “In some embodiments, the vehicle management system is configured to determine that the first rider has exited the vehicle in response to (1) receiving a location of a remote computing device associated with the first rider and determining that the location is not inside the vehicle, (2) failing to detect a direct wireless communication from the remote computing device to an antenna of the vehicle, (3) determining, by the image analysis system, that a second interior image does not show the first rider, and/or (4) determining, by the image analysis system, that an infrared image of the interior of the vehicle does not show the first rider.”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Ito with Schwie to include using the vehicle system to detect the rider, or occupant, exiting the vehicle. Doing so allows the self-driving vehicle to be self-reliant for performing subsequent actions, based on specific events, [Schwie, pg. 1, para 0012], such as the occupant behavior and needs [Schwie, pg. 1, para 0013]. Additionally, it allows the self-driving vehicle to determine vehicle availability to pick up a second rider, or occupant [Schwie, pg. 16, para 0245] or perform a subsequent behavior [Schwie, pg. 17, para 0254].
However, Eathakota teaches: […] activate a security response, wherein the security response comprises disabling a security feature of the vehicle in response to […] the exit of the occupant. See [Eathakota, pg. 3, para 0040], which explains that the user confirms they have exited the vehicle. See also [Eathakota, pg. 3, Claim 10], which summarizes that the temporary access is ended and the vehicle is returned. See also [Eathakota, Abstract], which further explains that the smartlock features are turned on with the temporary access, or off when it ends as stated above, and provides the user access to the system features. Finally see [Eathakota, pg. 3, paras 0038-0043], which explains these features can include restrictions, such as geographic or time limitations.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified Ito with Eathakota to include detecting the occupant has exited the vehicle and subsequently disabling security features. Doing so provides the user with access appropriate for their temporary usage, but allows any security features to be turned off when the vehicle is returned, which is ideal for test drives and rental cars [Eathakota, pg. 1, para 0003] that are stored at other locations [Eathakota, pg. 1, para 0004] and subsequently used by others.
Regarding Claim 13, Ito as modified discloses the limitations of Claim 12.
Ito further discloses: wherein the processor is further configured to execute the computer-executable instructions to cause the processing system to: receive an input of a destination location; acquire a location of the vehicle; determine that the location of the vehicle matches the destination location of the vehicle; and unlock the door of the vehicle in response to determining that the location of the vehicle matches the destination location of the vehicle. See again [Ito, pg. 3, para 0055], which explains that the travel plan includes a pre-set destination, [Ito, FIG.’s 2 and 4 and pg. 3, para 0043], which explains that the GPS device measures the vehicle location, and [Ito, pg. 3, para 0052], which further explains that the position acquisition section acquires the vehicle location from the GPS. See again, [Ito, pg. 6, para 0097], which explains that after vehicle operation, the vehicle controller device determines that the vehicle has arrived at its destination and proceeds with next steps and [Ito, FIG. 10] which identifies the next steps (S31 and S34) as the vehicle “presentation and unlocking process.”
Regarding Claim 14, Ito as modified discloses the limitations of Claim 12.
Ito does not disclose: wherein the processor is further configured to execute the computer-executable instructions to cause the processing system to: drive the vehicle to a base location in response to determining that the occupant exited the vehicle.
However, Schwie teaches: drive the vehicle […] in response to determining that the occupant exited the vehicle. See again [Schwie, pg. 2, para 0037], which explains that the vehicle management system determines that the rider has exited the vehicle using imaging and [Schwie, pg. 3, para 0040], which further explains that the vehicle management system can use location, wireless communication status, and imaging to determine that a rider has exited or entered the vehicle. Also see again [Schwie, pg. 2, para 0038], which explains that after an occupant has exited the vehicle, the imaging system may also determine the vehicle needs to be cleaned and the self-driving vehicle will drive to a cleaning facility. Finally see again [Schwie, pg. 17, para 0254], which further explains that the vehicle management system can cause various vehicle behaviors including parking the vehicle, idling the vehicle, or driving the vehicle to a user’s home.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Ito with Schwie to include using the vehicle system to detect the rider, or occupant, exiting the vehicle and drive the vehicle in response. Doing so allows the self-driving vehicle to be self-reliant for performing subsequent actions, based on specific events, [Schwie, pg. 1, para 0012], such as the occupant behavior and needs [Schwie, pg. 1, para 0013]. Additionally, it allows the self-driving vehicle to determine vehicle availability to pick up a second rider, or occupant [Schwie, pg. 16, para 0245] or perform a subsequent behavior, such as sending the vehicle home [Schwie, pg. 17, para 0254] or for cleaning the vehicle [Schwie, pg. 2, para 0038].
However, Eathakota teaches: wherein the processor is further configured to execute the computer-executable instructions to cause the processing system to: drive the vehicle to a base location in response to [… determining] the occupant exited the vehicle. See again [Eathakota, pg. 3, para 0040], which explains that the user confirms they have exited the vehicle and temporary access is ended and the vehicle is returned to the vehicle lot. See also [Eathakota, pg. 3, Claim 10], which summarizes that the temporary access is ended and the vehicle is returned.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified Ito with Eathakota to include ensuring that an occupant has exited the vehicle and returning the vehicle to a lot. Doing so provides the user with only temporary access to a vehicle, which is ideal for test drives and rental cars [Eathakota, pg. 1, para 0003] that are stored at other locations, allowing the vehicles to be sent from the storage location to the user [Eathakota, pg. 1, para 0004] temporarily, and returned once the user is done [Eathakota, pg. 1, para 0015].
Regarding Claim 16, Ito as modified discloses the limitations of Claim 12.
Ito further discloses: wherein the processor is further configured to execute the computer-executable instructions to cause the processing system to: acquire identification data of the occupant; and acquire the authorization data associated with the occupant to the vehicle based on the identification data of the occupant. See again [Ito, FIG. 11 and pg. 7, paras 0110-0114 and 116], which explains that the occupant is authorized based on matching identification codes, where the occupant must provide the authentication code and the unlocking terminal determines if the authentication matches the stored data.
Regarding Claim 17, Ito as modified discloses the limitations of Claim 12.
Ito further discloses: wherein to detect the unauthorized attempt […] by the object is based on the identification data and the authorization data; and wherein the security response comprises the processor executing the computer-executable instructions to cause the processing system to provide a notification in response to the detecting of the unauthorized attempt. See again, [Ito, FIG. 11 and pg. 7, paras 0112-0114], which explains that the vehicle checks the user input authentication to detect an authorized, or unauthorized, attempt and provides a notification of a failed authorization.
Ito does not explicitly disclose: attempt to open the door of the vehicle.
However, Gordon teaches: attempt to open the door of the vehicle. See [Gordon, col 12, lines 1-15], which explains that the vehicle can detect the door handle being pulled by an unauthorized person.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified Ito with Gordon to include detecting an attempt to open the door handle by an unauthorized person. Doing so would allow the vehicle to stop an attempt where the user does not just approach the vehicle, but is attempting to get inside, steal from, or damage the vehicle, and further prevent access of the unauthorized person [Gordon, col 12, lines 1-15].
Claims 7-9 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ito in view of Schwie, Eathakota, and Gordon and further in view of Huang et al., US-11,099,558-B2 (herein "Huang").
Regarding Claim 7, Ito as modified discloses the limitations of Claim 1.
Ito does not disclose: allowing remote control of the vehicle in response to a triggering event.
However, Huang teaches: allowing remote control of the vehicle in response to a triggering event. See [Huang, col 2, lines 20-43], which explains that the vehicle can be remotely controlled, “a remote operator […] may have at least partial control of the vehicle or other object […], and may provide controls for the vehicle or other object using a remote control system.” See [Huang, FIG. 3A], which shows B302, the step to determine that a transfer to remote control is required. See also [Huang, cols 4-5, lines 45-67 and 1-15], which describes the transfer determination step, “[…] a remote operator may be transferred at least partial control of the vehicle or other object in response to a determination (e.g., by the autonomous vehicle or other object) that the vehicle or object cannot or should not (e.g., based on rules, conditions, constraints, etc.) navigate a situation or environment (e.g., debris blocking a safe path, rules of the road prevent the vehicle from proceeding a certain way, a dangerous condition has presented itself, such as a fallen tree or power line, etc.).” See also [Huang, col 8, lines 12-26], which further describes the transfer determination step, “However, in the current autonomous vehicle control system 100, the vehicle 102 may determine, in response to encountering the situation represented in the image 146, to transfer at least partial control to the remote control system 106. In other examples, the determination to transfer the control of the vehicle 102 (e.g., to initiate a remote control session) may be made by the remote operator (or otherwise may be made at the remote control system 106); by a passenger of the vehicle 102 (e.g., using a command or signal, such as a voice command, an input to a user interface element, a selection of a physical button, etc.); and/or by another actor.
For example, sensor data may be analyzed at the remote control system 106 (and/or by another system remote from the vehicle 102) and may be used to determine whether a remote control session should be initiated.”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Ito with Huang to include allowing remote control of the vehicle, in response to a trigger. Doing so would allow the vehicle to be safely operated in scenarios where autonomous control is not safe [Huang, col 1, lines 17-24] or provide an option when there is no driver or driver controls, such as an autonomous robo-taxi [Huang, col 1, lines 35-41].
Regarding Claim 8, Ito as modified discloses the limitations of Claim 7.
Ito further discloses: the triggering event includes an unauthorized attempt […] by the object when the object is not authorized to pick up the occupant detected based on the identification data and the authorization data. See again, [Ito, FIG. 11 and pg. 7, paras 0112-0114], which explains that the vehicle checks the user-input authentication to detect an authorized, or unauthorized, attempt and provides a notification of a failed authorization.
Ito does not explicitly disclose: attempt to open the door of the vehicle.
However, Gordon teaches: attempt to open the door of the vehicle. See [Gordon, col 12, lines 1-15], which explains that the vehicle can detect the door handle being pulled by an unauthorized person.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have further modified Ito, as modified by Huang, with Gordon to include detecting an attempt to open the door handle by an unauthorized person. Doing so would allow the vehicle to stop an attempt where the user does not just approach the vehicle, but is attempting to get inside, steal from, or damage the vehicle, and further prevent access of the unauthorized person [Gordon, col 12, lines 1-15].
Regarding Claim 9, Ito as modified discloses the limitations of Claim 7.
Ito as modified does not disclose: creating a digital twin based on image data from the vehicle to provide augmented control of the vehicle.
However, Huang teaches: creating a digital twin based on image data from the vehicle to provide augmented control of the vehicle. See [Huang, col 2, lines 20-43], which explains that the vehicle sensor data can be used to render a virtual vehicle and environment, “Sensor data from the vehicle or other object may be sent from the vehicle or the other object to the remote control system, and the remote control system may generate and render a virtual environment for display using a VR system (e.g., on a display of a VR headset). The remote operator (e.g., a human, a robot, etc.) may provide controls to a control component(s) of the remote control system to control a virtual representation of the vehicle or other object in the virtual environment. The controls from the remote control system may then be sent […] to the vehicle or other object, and the vehicle or other object may execute controls that are based on the controls from the remote control system.” See [Huang, cols 4-5, lines 45-67 and 1-15], which explains that the sensor data includes image data, “Sensor data (e.g., from cameras, LIDAR sensors, RADAR sensors, microphones, etc.) representative of fields of view of the sensors of the vehicle or object may be generated and transmitted to a control system (e.g., the system used by the remote operator).” And finally see [Huang, col 5, lines 62-64], which explains that the virtual vehicle and environment can be used by the remote operator to control the vehicle, “The remote operator may use a view of the virtual environment and/or the control components of the control system to control the vehicle in the physical environment,” [Huang, col 9, lines 23-48] using a remote control, augmented reality, system, “The remote control system 106 may include a virtual environment generator 114, a VR headset 116, and a remote control(s) 118.
The virtual environment generator 114 may use the sensor data, the vehicle state data, and/or the calibration data to generate a virtual environment that may represent the environment […] in the field(s) of view of the sensor(s) 110 of the vehicle 102 […], as well as represent at least a portion of the vehicle 102 […] and/or controls of the vehicle 102 […]. In some examples, the virtual environment may include virtual representations of portions of the vehicle 102 that may not be visible to a driver or passenger of the vehicle 102 in the real-world environment […].”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified Ito with Huang to include augmented control for the remote operator. Doing so would allow the remote operator to fully immerse in the vehicle operation, giving the operator an intuitive, natural sense of the environment and controls, and providing safer operation, which cannot be done with 2D formats [Huang, col 1, lines 52-63].
Regarding Claim 18, Ito as modified discloses the limitations of Claim 12.
Ito does not disclose: wherein the processor is further configured to execute the computer-executable instructions to cause the processing system to: allow remote control of the vehicle in response to a triggering event.
However, Huang teaches: wherein the processor is further configured to execute the computer-executable instructions to cause the processing system to: allow remote control of the vehicle in response to a triggering event. See [Huang, col 2, lines 20-43], which explains that the vehicle can be remotely controlled. See also [Huang, cols 4-5, lines 45-67 and 1-15], which describes that the transfer determination step can include triggers such as the vehicle environment. See also [Huang, col 8, lines 12-26], which further describes that the transfer determination can be made based on the situation, the operator, the passenger, or sensor data.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify Ito with Huang to include allowing remote control of the vehicle, in response to a trigger. Doing so would allow the vehicle to be safely operated in scenarios where autonomous control is not safe [Huang, col 1, lines 17-24] or provide an option when there is no driver or driver controls, such as an autonomous robo-taxi [Huang, col 1, lines 35-41].
Regarding Claim 19, Ito as modified discloses the limitations of Claim 18.
Ito further discloses: the triggering event includes an unauthorized attempt […] by the object when the object is not authorized to pick up the occupant detected based on the identification data and the authorization data. See again, [Ito, FIG. 11 and pg. 7, paras 0112-0114], which explains that the vehicle checks the user-input authentication to detect an authorized, or unauthorized, attempt and provides a notification of a failed authorization.
Ito does not explicitly disclose: attempt to open the door of the vehicle.
However, Gordon teaches: attempt to open the door of the vehicle. See [Gordon, col 12, lines 1-15], which explains that the vehicle can detect the door handle being pulled by an unauthorized person.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have further modified Ito, as modified by Huang, with Gordon to include detecting an attempt to open the door handle by an unauthorized person. Doing so would allow the vehicle to stop an attempt where the user does not just approach the vehicle, but is attempting to get inside, steal from, or damage the vehicle, and further prevent access of the unauthorized person [Gordon, col 12, lines 1-15].
Regarding Claim 20, Ito as modified discloses the limitations of Claim 18.
Ito as modified does not disclose: wherein the processor is further configured to execute the computer-executable instructions to cause the processing system to create a digital twin based on image data from the vehicle to provide augmented control of the vehicle.
However, Huang teaches: wherein the processor is further configured to execute the computer-executable instructions to cause the processing system to create a digital twin based on image data from the vehicle to provide augmented control of the vehicle. See [Huang, col 2, lines 20-43], which explains that the vehicle sensor data can be used to render a virtual vehicle and environment and [Huang, cols 4-5, lines 45-67 and 1-15], which explains that the sensor data includes image data. See [Huang, col 5, lines 62-64], which explains that the virtual vehicle and environment can be used by the remote operator to control the vehicle, [Huang, col 9, lines 23-48] using a remote control system that includes a virtual environment generator, a VR headset, and remote controls.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified Ito with Huang to include augmented control for the remote operator. Doing so would allow the remote operator to fully immerse in the vehicle operation, giving the operator an intuitive, natural sense of the environment and controls, and providing safer operation, which cannot be done with 2D formats [Huang, col 1, lines 52-63].
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Ito in view of Schwie, Eathakota, and Gordon and further in view of Edwards, US-11,042,619-B2 (herein "Edwards").
Regarding Claim 10, Ito as modified discloses the limitations of Claim 1.
Ito does not disclose: the identification data corresponds to a key associated with the authorization data in a blockchain, and the authorization data is generated based on the key.
However, Edwards teaches: the identification data corresponds to a key associated with the authorization data in a blockchain, and the authorization data is generated based on the key. See [Edwards, col 7, lines 5-9], which describes that the vehicle has an occupant verification system, “The vehicle 200 can include the occupant verification system 260 or capabilities to support or interact with the occupant verification system 260 […].” See also [Edwards, col 8, lines 3-11], which explains the occupant verification system uses blockchain data to validate identity, “The occupant verification system 260 is more clearly described with reference to FIG. 4. [… and] can include one or more modules which integrate the distributed and immutable nature of a blockchain to evaluate and authenticate occupancy in a vehicle, to validate identity information received with relation to the one or more occupants, to evaluate trustworthiness of said occupants, and/or forward to one or more modules or systems for authentication of occupant identity and quantity,” and [Edwards, col 8, lines 44-48], which explains that the occupant verification system accesses and verifies the data in the blockchain using a hash, “The blocks can receive a verification code, such as a hash. Then, the occupant verification system 260 can forward the block and the verification code to other recipients in the network, for verification and incorporation into the blockchain ledger.” See also [Edwards, cols 9-10, lines 59-67 and 1-2], which explain that occupant identifiers can include various forms of occupant data as part of the identification data, “data as derived from a government issued identification (ID), law enforcement reports or other government interactions, self-reported information […], data from associated devices […], or others. In one example, the occupant identifiers can include at least one of the prior vehicle interaction, criminal history, passport, driver's license […], or combinations thereof.
The occupant indicator and/or the one or more occupant identifiers can then be stored as part of the identification data 460 […].” And finally see [Edwards, cols 10-11, lines 65-67 and 1-5], which explain that the evaluation module accesses the blockchain data, which includes authenticity data, “The evaluation module 430 can include instructions to reference known information or indicia which are stored in blocks of the blockchain ledger. The blocks can contain authenticity data about one or more of the occupants. The authenticity data can include previously collected and stored information about a variety of vehicles and occupants which have interacted with the occupant verification system 260,” and [Edwards, col 11, lines 20-22] which further explains the evaluation module compares the authenticity data to verify the occupant, “The evaluation module 430 can then compare the occupant identifiers, as described above, to the authenticity data to corroborate the occupant identifiers as presented.”
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified Ito with Edwards to include using blockchain data to retrieve and generate authorization data. Doing so would add redundancy checks and allow access to previously recorded data [Edwards, cols 10-11, lines 65-67 and 1-5] and shared data [Edwards, col 13, lines 17-27] providing a broader data pool.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIN MARIE HARTMANN whose telephone number is (571)272-5309. The examiner can normally be reached M-F 7-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kito Robinson can be reached at (571) 270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/E.M.H./Examiner, Art Unit 3664
/KITO R ROBINSON/Supervisory Patent Examiner, Art Unit 3664