Prosecution Insights
Last updated: April 19, 2026
Application No. 18/459,477

RETRIEVING LOST ITEMS IN THE VEHICLE

Final Rejection §103
Filed: Sep 01, 2023
Examiner: AZIMA, SHAGHAYEGH
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: FCA US LLC
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 7m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (above average; 286 granted / 350 resolved; +19.7% vs TC avg)
Interview Lift: +11.4% (moderate; measured over resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline); 36 applications currently pending
Total Applications: 386 (career history, across all art units)

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 350 resolved cases
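The with-interview figure above is consistent with simple additive arithmetic: the 82% base grant probability plus the +11.4% interview lift is roughly the 93% shown. A minimal sketch of that relationship (the additive model is an assumption for illustration, not the tool's documented methodology):

```python
# Numbers taken from the dashboard cards above; the additive combination
# is an illustrative assumption, not the analytics tool's stated formula.
base_grant_probability = 0.82   # examiner career allow rate / grant probability
interview_lift = 0.114          # lift observed in resolved cases with interview

with_interview = base_grant_probability + interview_lift
print(f"Estimated grant probability with interview: {with_interview:.0%}")
```

Formatted to the nearest percent, 0.934 matches the dashboard's 93% with-interview estimate.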

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the applicant's communication filed on 12/18/2025. Pursuant to that communication, claims 1-21 filed on 12/18/2025 are currently pending in the instant application. Claims 1, 7, 12, 19, and 20 have been amended without adding new subject matter. Claim 8 has been canceled. New claim 21 has been added without adding new subject matter.

Response to Arguments

Applicant's arguments with respect to claims 1-21 have been considered:
- With regard to the 35 U.S.C. § 112 rejection, the rejection is withdrawn in view of the amendment filed on 12/18/2025.
- With regard to the 35 U.S.C. § 101 rejection, the rejection is withdrawn in view of the amendment and arguments filed on 12/18/2025.
- With regard to the prior art rejection, the arguments are moot in view of the new ground of rejection necessitated by the amendment filed on 12/18/2025.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7, 9, 11, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Taek (KR 102425345), also published as KR20170038232, in view of Julian et al. (US 2021/0394775).

As per claim 1, a method of identifying objects within a vehicle, comprising:

"determining a presence of an object from one or more images from one or more cameras having a field of view that includes at least a portion of an interior of a vehicle;" (Taek, ¶[0010] discloses that the object presence determination unit determines whether the object exists by using cameras mounted inside the vehicle or pressure detection sensors attached to seats of the vehicle.)
"determining a type of the object or determining at least one attribute of the object;" (Taek, ¶[0016] discloses that the object information generation unit generates information about the object including at least one of information about the location of the object, information about the content of the object, and information about the number of the objects. ¶[0045] discloses detection of objects in different seats; see also ¶[0069]. ¶[0098] discloses display frames indicating left-behind items, indicating long-term abandoned items that have been left in a vehicle more than twice, which can also indicate the quantity of left-behind items and the time of the most recent notification. Also, Figures 7 and 9, sections 550 and 570, where the data is the object type. See ¶[0073], ¶[0114], ¶[0130].)

"and recording the presence of the object and one or both of the type of the object and the at least one attribute of the object." (Taek, ¶[0035] and ¶[0102-0104] disclose storing images before boarding, during boarding, and post-boarding. ¶[0107-0108] discloses detecting an object in the images; Fig. 9 displays the recognition range of each item in the form of a box and stores it as data (S550). See also ¶[0110] and ¶[0112].)

"receiving a request relating to the object; transmitting a live camera image from the one or more camera." (Taek, ¶[0034] discloses that the present invention monitors objects in the vehicle in three dimensions (top, front) using 14 cameras inside the vehicle, and if an object remains, the AVN H/U (Head Unit) sends the driver a video of the location of the object and a notification message to a smartphone or smart key. ¶[0035] discloses that smartphones come with a smartphone app (Object Reminder) that provides location images, markers, and related information about items, allowing the user to conveniently view the information. ¶[0098] discloses that when the smartphone app is automatically launched, the smartphone (110) notifies the driver of an image of an item displayed in the vehicle in the form of a notification message.)

Taek does not explicitly disclose the following, which would have been obvious in view of Julian, from a similar field of endeavor: "transmitting a live camera image from the one or more camera." (Julian, ¶[0075] discloses that the remote alert transmission may include various types of information, data, and the images or video associated with the alert from inside the vehicle. ¶[0076] discloses that when a remote alert is transmitted, a remote device or party may be able to request and/or otherwise activate a live video feed from one or more of the cameras in the vehicle.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Julian's technique of vehicle monitoring and remote reporting into Taek's technique to provide the known and expected uses and benefits of Julian's technique over Taek's in-vehicle object reminder technique. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Julian into Taek in order to detect unsafe situations in the vehicle. (Refer to Julian paragraph [0004].)
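The claim 1 mapping above amounts to a detect-record-request-stream flow: detect an object from interior camera images (Taek), record its presence and type, and serve a live feed on a remote request (Julian). A minimal illustrative sketch of that flow; all class, function, and URI names are hypothetical, not from either reference or the application:

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    obj_type: str   # e.g. "phone" (Taek: information about the content of the object)
    location: str   # e.g. "rear-left seat" (Taek: information about the location)

@dataclass
class CabinMonitor:
    """Hypothetical sketch of the claimed method: record detected objects
    (Taek) and answer a remote request with a live feed handle (Julian ¶[0076])."""
    log: list = field(default_factory=list)

    def record_presence(self, obj: DetectedObject) -> None:
        # "recording the presence of the object and ... the type of the object"
        self.log.append(obj)

    def handle_request(self) -> str:
        # "receiving a request ... transmitting a live camera image"
        return "live-feed://interior-camera-1"

monitor = CabinMonitor()
monitor.record_presence(DetectedObject("phone", "rear-left seat"))
print(len(monitor.log), monitor.handle_request())
```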
As per claim 2, the method of claim 1: Taek as modified by Julian further discloses "which also includes transmitting information relating to the presence of the object and one or both of the type of the object and the at least one attribute of the object." (Taek, ¶[0008] discloses an object information generating unit for generating information about the object if the object is determined to exist inside the vehicle, and an object information transmitting unit for transmitting information about the object to a user terminal. See also ¶[0018].)

As per claim 3, the method of claim 2, "wherein the information transmitted includes text indicating the presence and type of the object." (Taek, ¶[0027] discloses transmitting a warning message to the remote key of the vehicle if it is determined that the object exists inside the vehicle. See also ¶[0135].)

As per claim 4, the method of claim 2, "wherein the information transmitted includes one or more images from the one or more cameras that show an area of an interior of the vehicle in which the object is located." (Taek, ¶[0034-0035] discloses sending the driver a video of the location of the object and a notification message to a smartphone or smart key.)

As per claim 5, the method of claim 1, "wherein the step of determining the presence of the object is accomplished with image recognition algorithms in a computing unit that associate an object within a field of view of the one or more cameras with a predetermined object type." (Taek, ¶[0073] discloses that, among the objects recognized by the camera, there are cases where a person may still be sitting in the seat even though the person has finished getting off, so a function that can distinguish between objects and humans is added. ¶[0112] discloses Fig. 9, which displays the recognition range of each item in the form of a box and stores it as data (S570).)

As per claim 7, the method of claim 1, "which also includes determining a movement of the object from a first location to a second location and recording the presence of the object at the second location." (Taek, ¶[0035] discloses providing location images, markers, and related information about items. Figure 8, section S575, discloses comparing changed object/item images as changed, retained, deleted, or new, and S580 stores the data regarding the item/object. ¶[0111] and Figure 5 disclose that, afterwards, the H/U (130) compares the video after getting off (440) with the initial video (410) (or the video before getting on (420), or the video after getting on (430)) to determine whether there are any changes (S565). ¶[0112] discloses that if it is determined that there is a changed part based on the judgment result of step S565, the H/U (130) recognizes each changed image as an item as shown in (b) of Fig. 9, and displays the recognition range of each item in the form of a box and stores it as data (S570). ¶[0113] discloses that, afterwards, the H/U (130) compares the object data image saved in step S550 with the object data image saved in step S570 and processes the previously saved images by classifying them as changed/deleted/maintained/new, etc. (S575). ¶[0114] discloses that, afterwards, the H/U (130) displays the remaining data as an object on the image (440) as a result of object processing and calculates the number of objects (S580). Note the changes in the heart symbol's location.) "wherein the information transmitted includes the second location." (Taek, Figure 5 shows the change of location of objects in different rear seat positions.)
As per claim 9, the method of claim 2, "wherein the information includes one or more of the size, shape, color and location of the object." (Taek, ¶[0016] discloses that the object information generation unit generates information about the object including at least one of information about the location of the object; further, Figure 5 shows the location of the object in the seat. See also ¶[0034].)

As per claim 11, the method of claim 2, "wherein the information transmitted includes a present image or video feed from one or more of the one or more cameras." (Taek, ¶[0034] discloses sending the driver a video of the location of the object and a notification message to a smartphone or smart key.)

As per claim 13, the method of claim 1, "which also includes determining removal of the object from the vehicle and recording that the object is not present within the vehicle." (Taek, ¶[0099] discloses that, at the same time, the H/U (130) transmits only the presence or absence of an object to the BCM. ¶[0113] and Figures 5, 7, 8, and 9 disclose that the H/U (130) compares the object data image saved in step S550 with the object data image saved in step S570 and processes the previously saved images by classifying them as changed/deleted/maintained/new, etc. (S575).)

As per claim 14, the method of claim 13, "wherein the step of recording that the object is not present within the vehicle is accomplished by deleting the object from a list of objects recorded as being present in the vehicle." (Taek, ¶[0099] discloses that, at the same time, the H/U (130) transmits only the presence or absence of an object to the BCM. ¶[0113], Figures 5, 7, 8, and 9, and the related paragraphs disclose that the H/U (130) compares the object data image saved in step S550 with the object data image saved in step S570 and processes the previously saved images by classifying them as changed/deleted/maintained/new, etc. (S575).)
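The comparison step cited repeatedly above (Taek, steps S565-S580) classifies each item across two snapshots as changed, deleted, maintained, or new. A minimal sketch of that classification over two snapshots; the dict-based representation and item identifiers are assumptions for illustration, not Taek's actual data format:

```python
def classify_items(before: dict, after: dict) -> dict:
    """Classify items between two snapshots keyed by item id; the value
    stands in for the stored object-data image/location (Taek S550/S570)."""
    result = {}
    for item_id in before.keys() | after.keys():
        if item_id not in after:
            result[item_id] = "deleted"      # removed from vehicle (claims 13-14)
        elif item_id not in before:
            result[item_id] = "new"
        elif before[item_id] != after[item_id]:
            result[item_id] = "changed"      # e.g. moved to another seat (claim 7)
        else:
            result[item_id] = "maintained"
    return result

print(classify_items({"phone": "seat-2", "bag": "seat-3"},
                     {"phone": "seat-4", "cup": "seat-1"}))
```

Here the phone is "changed" (it moved seats), the bag "deleted", and the cup "new", mirroring the changed/deleted/maintained/new processing of step S575.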
As per claim 15, the method of claim 2, "wherein the transmitting step is accomplished by a communication unit of the vehicle and via a wireless transmission protocol." (Taek, ¶[0034] discloses that if an object remains, the AVN H/U (Head Unit) sends the driver a video of the location of the object and a notification message to a smartphone or smart key. Further, see ¶[0061], ¶[0097], ¶[0135].)

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Taek (KR 102425345), also published as KR20170038232, in view of Julian et al. (US 2021/0394775), further in view of Fritzsche et al. (US 2003/0098909).

As per claim 6, the method of claim 5: Taek as modified by Julian is silent on the following, which would have been obvious in view of Fritzsche, from a similar field of endeavor: "wherein the association is accomplished based upon one or more of the size, shape and color of the object." (Fritzsche, ¶[0015] discloses that the size of the vehicle occupants and/or objects in the vehicle can likewise be determined.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Fritzsche's technique of monitoring the internal space of a vehicle into the technique of Taek as modified by Julian to provide the known and expected uses and benefits of Fritzsche's technique over the in-vehicle object reminder technique of Taek as modified by Julian. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Fritzsche into Taek as modified by Julian in order to accurately detect objects in the vehicle's internal space. (Refer to Fritzsche paragraph [0013].)

Claims 10 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Taek (KR 102425345), also published as KR20170038232, in view of Julian et al. (US 2021/0394775), further in view of Kondo et al. (US 2022/0270378).

As per claim 10, the method of claim 1, "which includes establishing an identity of a person in the vehicle and associating the object with the person": Taek, ¶[0074] discloses performing facial recognition on the image input by a horizontal camera that monitors the frontal image and determining that it is a human face. However, Taek as modified by Julian is silent on the following, which would have been obvious in view of Kondo: "which includes establishing an identity of a person in the vehicle and associating the object with the person when the object is determined to be moved by the person." (Kondo, ¶[0020] discloses that the owner specifying unit 112 may specify a user who possesses (or wears) the lost item detected by the lost item detection unit 111 in a vehicle interior image captured at a time of boarding of a plurality of users on the vehicle 30, as a candidate of the lost item owner. In this manner, with the detection process executed focusing on the belongings of the user at the time of boarding the vehicle, the owner candidate of the lost item may be specified at an early stage with high accuracy.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Kondo's technique of lost item detection into the technique of Taek as modified by Julian to provide the known and expected uses and benefits of Kondo's technique over the in-vehicle object reminder technique of Taek as modified by Julian. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement.
Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Kondo into Taek as modified by Julian in order to provide lost item detection in a driverless vehicle. (Refer to Kondo paragraph [0006].)

As per claim 16, the method of claim 2: Taek as modified by Julian does not explicitly disclose the following, which would have been obvious in view of Kondo, from a similar field of endeavor: "wherein the transmitting step is accomplished by providing a signal within the vehicle to alert a vehicle occupant to the presence of the object in the vehicle." (Kondo, ¶[0028] discloses that the owner specifying unit 112 determines whether the owner candidate is still in the vehicle by using a known image processing technology, for example. When having determined that the owner candidate is still in the vehicle, the owner specifying unit 112 transmits (outputs) in-vehicle announcement information to the vehicle 30 through the network NW. With this process, the in-vehicle announcement information will be reproduced by using a speaker 35 installed in the vehicle 30. Incidentally, the "in-vehicle announcement information" is information for reminding that there is a lost item in the vehicle, and examples of this information are text information including information that may specify the lost item, voice information corresponding to the text information, and the like.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Kondo's technique of lost item detection into the technique of Taek as modified by Julian to provide the known and expected uses and benefits of Kondo's technique over the in-vehicle object reminder technique of Taek as modified by Julian. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement.

Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Kondo into Taek as modified by Julian in order to provide lost item detection in a driverless vehicle. (Refer to Kondo paragraph [0006].)

As per claim 17, the method of claim 16: Taek as modified by Julian as modified by Kondo further discloses "which includes assigning a priority level to objects determined to be within the vehicle based upon a predetermined prioritization, and providing the signal when an object above a threshold priority level is determined to be within the vehicle and when it is determined that an occupant of the vehicle is likely to leave the vehicle." (Kondo, ¶[0026] discloses that, in addition, when the lost item is detected by the lost item detection unit 111, the owner specifying unit 112 may specify the owner candidate only in a case where the lost item meets a predetermined requirement. Note that "the case meeting a predetermined requirement" is, for example, a case where it is not particularly necessary to manage or return an object detected as a lost item; specifically, this corresponds to a case where the lost item is an empty can, an empty plastic bottle, a magazine, a newspaper, and the like. In this manner, even in a case where a lost item in the vehicle is detected, it is possible to reduce pointless search for the owner candidate by searching for the owner candidate only when necessary.)
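Claim 17's prioritization, as mapped to Kondo ¶[0026] above, filters low-value items (empty bottles, newspapers) before signaling, and signals only when an occupant is likely to leave. A minimal sketch of that kind of gating; the priority table, threshold value, and function name are hypothetical assumptions, not from Kondo or the application:

```python
# Hypothetical priority table; Kondo only distinguishes items "not particularly
# necessary to manage or return" (empty bottles, newspapers) from the rest.
PRIORITY = {"phone": 3, "wallet": 3, "book": 2, "newspaper": 0, "empty bottle": 0}
THRESHOLD = 1

def should_alert(item: str, occupant_leaving: bool) -> bool:
    """Signal only for items at or above the threshold priority, and only
    when an occupant is likely to leave (claim 18: door opened, engine off,
    or park mode)."""
    return occupant_leaving and PRIORITY.get(item, 1) >= THRESHOLD

print(should_alert("phone", occupant_leaving=True))       # True
print(should_alert("newspaper", occupant_leaving=True))   # False
```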
As per claim 18, the method of claim 17: Taek as modified by Julian as modified by Kondo further discloses "wherein the determination that an occupant is likely to leave the vehicle is based upon detecting an opening of a door of the vehicle, turning off an engine or motor of the vehicle, detecting that a vehicle operating mode is changed to a park mode, or a combination of two or more of these events." (Taek, ¶[0009] discloses that the vehicle status determination unit determines whether the vehicle is parked based on whether the vehicle's engine is turned off or whether the vehicle's doors are locked. See also ¶[0121-0122].)

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Taek (KR 102425345), also published as KR20170038232, in view of Julian et al. (US 2021/0394775), further in view of Sohmshetty et al. (US 2022/0253550).

As per claim 12, the method of claim 2: Taek as modified by Julian is silent on the following, which would have been obvious in view of Sohmshetty, from a similar field of endeavor: "wherein the information transmitted includes a preselected image that is representative of the object and does not show the actual object." (Sohmshetty, ¶[0032] discloses that the occupants of vehicle 101 may select user preferences, e.g., digital representations such as stick figures, avatars, celebrity likenesses, etc., via the infotainment system of vehicle 101, and transform the captured image data by replacing the occupants in the captured image data with the preselected digital representations. At step 308, the transformed image data may be displayed on displays.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Sohmshetty's technique of ensuring privacy in an autonomous vehicle into the technique of Taek as modified by Julian to provide the known and expected uses and benefits of Sohmshetty's technique over the in-vehicle object reminder technique of Taek as modified by Julian. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Sohmshetty into Taek as modified by Julian in order to provide safety and comfort for occupants of the autonomous vehicle and prevent any damage. (Refer to Sohmshetty paragraph [0001].)

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Taek (KR 102425345), also published as KR20170038232, in view of Julian et al. (US 2021/0394775), further in view of Grace et al. (US 2023/0306754).

As per claim 21, the method of claim 1: Taek as modified by Julian does not explicitly disclose the following, which would have been obvious in view of Grace, from a similar field of endeavor: "which also includes determining, during use of the vehicle, that the object has been moved from a first location that is within the field of view of the vehicle camera to a second location in the vehicle that is hidden from the field of view of the vehicle camera, recording the object as being within the second location".
(Grace, ¶[0012] discloses that, in autonomous vehicles, during provision of ride and delivery service, an item may fall into an area of the autonomous vehicle out of sight of the interior camera(s), e.g., under one of the seats. The ability to track an out of sight (OOS) item in real time (or close to real time) within the cabin of an autonomous vehicle would provide autonomous vehicle services companies with the awareness that an item remains in a vehicle and the ability to alert the owner of the item to retrieve the item (e.g., by identifying to the owner the tracked location of the item within the vehicle cabin as well as the identity of the item). ¶[0059] discloses that while the autonomous vehicle 510 is executing a ride or delivery, one or more items that may fall onto the floor of the vehicle 510 may be detected by one or more of the cameras 550. If and when the dropped item rolls out of "sight" of the cameras 550 (e.g., under one of the seats 530), the auxiliary sensors 560 as well as other detection modalities may enable the tracking of the item while it is obscured from the view of the cameras 550. ¶[0069] discloses that the LOT system begins tracking the fallen item (e.g., by creating a tracking record for the item). In particular embodiments, the initial tracking information recorded for a fallen item includes the location at which the item fell (or landed) and the time at which the item fell. It will be recognized that execution of the LOT system may be initiated in response to other "triggers" and that a fallen item is only one such trigger. ¶[0070] discloses that the tracking information may be updated to include a new location for the item (periodically and/or in response to detected movement of the item) and the time at which the item was in the designated location. Tracking of the fallen item may also include determining and recording an identity of the item (e.g., pen, book, mobile device), which identification may also be stored in the tracking information for the item. ¶[0071] discloses that a determination is made whether the item is still within the view of the interior cameras/sensors, i.e., whether the item is still within that view or is obscured from it. ¶[0073] discloses that the tracking information for the item is updated appropriately (e.g., with the inferred location(s) and/or existence of the item and the time(s) at which the inference(s) was/were made). ¶[0074] discloses that a camera may track the location and velocity of an item (e.g., a water bottle) onto the floor and under a seat because the movement of the water bottle is tracked (plus the information about the vehicle).)

"transmitting from the vehicle information relating to the presence of the object, the type of the object and the second location" (Grace, ¶[0077] discloses that the fleet management system may be notified that a fallen item has been detected in order to initiate an alert to the owner of the item; the notification may include some or all of the item tracking information. ¶[0079] discloses that notification may occur at any point during the operation of the LOT system as described in connection with FIG. 6. For example, notifications may occur immediately upon detection of a fallen item as well as after every change in status of the item as detected by the LOT system; the notification may include any or all of the tracking information for the item. ¶[0082-0083] disclose that an alert is provided to the owner to retrieve the fallen item before exiting the autonomous vehicle. In some embodiments, the alert may be provided by playing an audio alert over one or more speakers provided within the cabin of the vehicle and/or via a user app provided on a mobile device of the owner. Additionally and/or alternatively, the alert may be provided visually using one or more display devices provided within the cabin of the vehicle and/or via the UI of a user app provided on a mobile device of the owner.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Grace's technique of lost object tracking into the technique of Taek as modified by Julian to provide the known and expected uses and benefits of Grace's technique over the in-vehicle object reminder technique of Taek as modified by Julian. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Grace into Taek as modified by Julian in order to provide lost item detection in an autonomous vehicle. (Refer to Grace paragraph [0001].)

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Taek (KR 102425345), also published as KR20170038232, in view of Grace et al. (US 2023/0306754).
As per claim 19, a vehicle camera image processing system, comprising:

"a vehicle camera; at least one processor; and memory accessible by the at least one processor, the memory storing computer instructions that, when executed by the at least one processor, cause the image processing system to: receive image data captured by the vehicle camera;" (Taek, ¶[0010] discloses that the object presence determination unit determines whether the object exists by using cameras mounted inside the vehicle or pressure detection sensors attached to seats of the vehicle. ¶[0011] discloses that the object presence determination unit uses cameras mounted inside the vehicle to photograph each seat of the vehicle in a horizontal direction, and cameras to photograph each seat of the vehicle in a vertical direction. See also ¶[0083].)

"determine a type of the object;" (Taek, ¶[0016] discloses that the object information generation unit generates information about the object including at least one of information about the location of the object, information about the content of the object, and information about the number of the objects. ¶[0045] discloses detection of objects in different seats; see also ¶[0069]. ¶[0098] discloses display frames indicating left-behind items, indicating long-term abandoned items that have been left in a vehicle more than twice, which can also indicate the quantity of left-behind items and the time of the most recent notification. Also, Figures 7 and 9, sections 550 and 570, where the data is the object type.)

"and record in the memory the presence in the vehicle of the object and the type of the object" (Taek, ¶[0035] and ¶[0102-0104] disclose storing images before boarding, during boarding, and post-boarding. ¶[0107-0108] discloses detecting an object in the images; Fig. 9 displays the recognition range of each item in the form of a box and stores it as data (S550). See also ¶[0110] and ¶[0112].)

However, Taek does not explicitly disclose the following, which would have been obvious in view of Grace, from a similar field of endeavor: "determine, during use of the vehicle, that the object has been moved from a first location that is within the field of view of the vehicle camera to a second location in the vehicle that is hidden from the field of view of the vehicle camera; record the object as being within the second location." (Grace, ¶[0012] discloses that, in autonomous vehicles, during provision of ride and delivery service, an item may fall into an area of the autonomous vehicle out of sight of the interior camera(s), e.g., under one of the seats. The ability to track an out of sight (OOS) item in real time (or close to real time) within the cabin of an autonomous vehicle would provide autonomous vehicle services companies with the awareness that an item remains in a vehicle and the ability to alert the owner of the item to retrieve the item (e.g., by identifying to the owner the tracked location of the item within the vehicle cabin as well as the identity of the item). ¶[0059] discloses that while the autonomous vehicle 510 is executing a ride or delivery, one or more items that may fall onto the floor of the vehicle 510 may be detected by one or more of the cameras 550. If and when the dropped item rolls out of "sight" of the cameras 550 (e.g., under one of the seats 530), the auxiliary sensors 560 as well as other detection modalities may enable the tracking of the item while it is obscured from the view of the cameras 550. ¶[0069] discloses that the LOT system begins tracking the fallen item (e.g., by creating a tracking record for the item). In particular embodiments, the initial tracking information recorded for a fallen item includes the location at which the item fell (or landed) and the time at which the item fell. It will be recognized that execution of the LOT system may be initiated in response to other "triggers" and that a fallen item is only one such trigger.
¶[0070] discloses that the tracking information may be updated to include a new location for the item (periodically and/or in response to detected movement of the item) and the time at which the item was in the designated location. Tracking of the fallen item may also include determining and recording an identity of the item (e.g., pen, book, mobile device), which identification may also be stored in the tracking information for the item. ¶[0071] discloses that a determination is made whether the item is still within the view of the interior cameras/sensors or is obscured from that view. ¶[0073] discloses that the tracking information for the item is updated appropriately (e.g., with the inferred location(s) and/or existence of the item and the time(s) at which the inference(s) was/were made). ¶[0074] discloses that a camera may track the location and velocity of an item (e.g., a water bottle) falling onto the floor and under a seat, because the movement of the water bottle is tracked (along with information about the vehicle).) “and transmit from the vehicle information relating to the presence of the object, the type of the object and the second location” (Grace, ¶[0077] discloses that the fleet management system may be notified that a fallen item has been detected in order to initiate an alert to the owner of the item. The notification may include some or all of the item tracking information. ¶[0079] discloses that notification may occur at any point during the operation of the LOT system as described in connection with FIG. 6. For example, notifications may occur immediately upon detection of a fallen item as well as after every change in status of the item as detected by the LOT system. The notification may include any or all of the tracking information for the item.
¶[0082]-[0083] disclose that an alert is provided to the owner to retrieve the fallen item before exiting the autonomous vehicle. In some embodiments, the alert may be provided by playing an audio alert over one or more speakers provided within the cabin of the vehicle and/or via a user app provided on a mobile device of the owner. Additionally and/or alternatively, the alert may be provided visually using one or more display devices provided within the cabin of the vehicle and/or via the UI of a user app provided on a mobile device of the owner.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Grace's lost-object-tracking technique with Taek's technique to provide the known and expected uses and benefits of Grace's technique over Taek's in-vehicle object-reminder technique. The proposed combination would have constituted a mere arrangement of old elements with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Grace into Taek in order to provide lost-item detection in an autonomous vehicle. (Refer to Grace, paragraph [0001].)

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Taek (KR 102425345, also published as KR 20170038232) in view of Grace et al. (US 2023/0306754), and further in view of Miller et al. (US 2018/0068192). As per claim 20, the system of claim 19, “which also includes a communication unit by which information about an object is wirelessly transmitted to a receiver located remotely from the vehicle;” (Taek, ¶[0034] discloses that if an object remains, the AVN H/U (head unit) sends the driver a video of the location of the object and a notification message to a smartphone or smart key. See further ¶[0061], ¶[0097], ¶[0135].)
However, Taek as modified by Grace does not explicitly disclose the following, which would have been obvious in view of Miller, from a similar field of endeavor: “and wherein the information includes an image or video file of the second location with the object not shown within the image or video file.” (Miller, FIG. 6, ¶[0005] discloses receiving, with an electronic control unit, a request from a remote device for one or more images of a vehicle interior, and generating one or more privacy images based on the one or more images and the privacy settings of the vehicle interior. Additionally, the method also includes controlling a transceiver to transmit the one or more privacy images to the remote device via an antenna. ¶[0045] discloses that the privacy image 600 includes two portions, a first portion 605 and a second portion 610. As illustrated in FIG. 6, the first portion 605 and the second portion 610 completely censor portions (for example, the driver seat 130 and the right back seat 138, respectively) of the privacy image 600 where the ECU 120 determined the location of two occupants (for example, the first occupant 505 and the second occupant 510 as illustrated in FIG. 5) based on metadata from the image processing ECU 110.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Miller's vehicle-interior imaging-privacy technique with Taek as modified by Grace to provide the known and expected uses and benefits of Miller's technique over the in-vehicle object-reminder technique of Taek as modified by Grace. The proposed combination would have constituted a mere arrangement of old elements with each performing its known function, the combination yielding no more than one would expect from such an arrangement.
Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Miller into Taek as modified by Grace in order to provide safety, security, comfort, and convenience for the occupants of a vehicle. (Refer to Miller, paragraph [0002].)

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAGHAYEGH AZIMA, whose telephone number is (571) 272-1459. The examiner can normally be reached Monday-Friday, 9:30-6:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAGHAYEGH AZIMA/
Examiner, Art Unit 2671
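For orientation, the lost-object-tracking (LOT) behavior the examiner cites from Grace ¶[0069]-[0079] (create a tracking record when an item falls, update the record on detected movement, note when the item leaves camera view, and build an owner notification carrying the tracking information) can be sketched roughly as follows. This is an illustrative sketch only; every name here (TrackingRecord, on_item_fell, update_tracking, build_owner_alert) is hypothetical and does not appear in Grace or any other cited reference:

```python
from dataclasses import dataclass, field
import time


@dataclass
class TrackingRecord:
    """Tracking record for a fallen item (cf. Grace ¶[0069]-[0070])."""
    identity: str            # inferred item type, e.g. "water bottle"
    location: tuple          # (x, y) position within the cabin
    timestamp: float         # time the item was at that location
    in_camera_view: bool     # False once the item is obscured (e.g. under a seat)
    history: list = field(default_factory=list)  # prior (location, time) pairs


def on_item_fell(identity, landing_location):
    """Trigger: an item fell onto the floor; begin tracking it."""
    return TrackingRecord(
        identity=identity,
        location=landing_location,
        timestamp=time.time(),
        in_camera_view=True,
    )


def update_tracking(record, new_location, visible):
    """Update the record on detected movement. When visible is False the
    new location is an inference (auxiliary sensors) rather than a direct
    camera observation (cf. ¶[0071], ¶[0073])."""
    record.history.append((record.location, record.timestamp))
    record.location = new_location
    record.timestamp = time.time()
    record.in_camera_view = visible
    return record


def build_owner_alert(record):
    """Notification payload including the tracking information (cf. ¶[0077])."""
    return {
        "item": record.identity,
        "last_known_location": record.location,
        "observed_directly": record.in_camera_view,
        "message": f"A {record.identity} remains in the vehicle.",
    }
```

A fleet-management service as described in ¶[0077] would then forward the alert payload to the item owner's app; that delivery path is outside this sketch.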

Prosecution Timeline

Sep 01, 2023
Application Filed
Sep 29, 2025
Non-Final Rejection — §103
Dec 18, 2025
Response Filed
Mar 05, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586350
DETERMINING AUDIO AND VIDEO REPRESENTATIONS USING SELF-SUPERVISED LEARNING
2y 5m to grant Granted Mar 24, 2026
Patent 12573209
ROBUST INTERSECTION RIGHT-OF-WAY DETECTION USING ADDITIONAL FRAMES OF REFERENCE
2y 5m to grant Granted Mar 10, 2026
Patent 12561989
VEHICLE LOCALIZATION BASED ON LANE TEMPLATES
2y 5m to grant Granted Feb 24, 2026
Patent 12530867
Action Recognition System
2y 5m to grant Granted Jan 20, 2026
Patent 12525049
PERSON RE-IDENTIFICATION METHOD, COMPUTER-READABLE STORAGE MEDIUM, AND TERMINAL DEVICE
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
93%
With Interview (+11.4%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 350 resolved cases by this examiner. Grant probability derived from career allow rate.
