Prosecution Insights
Last updated: April 19, 2026
Application No. 18/349,589

Method and System for Automatized Selection and Documentation of Personal Highlights During Journeys with a Motor Vehicle

Status: Non-Final OA (§102)
Filed: Jul 10, 2023
Examiner: LEGGETT, ANDREA C.
Art Unit: 2171
Tech Center: 2100 — Computer Architecture & Software
Assignee: Kia Corporation
OA Round: 3 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 76% — above average (484 granted / 639 resolved; +20.7% vs TC avg)
Interview Lift: +20.7% across resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 32 applications currently pending
Career History: 671 total applications across all art units
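
These headline figures follow from simple arithmetic on the raw counts. A minimal sketch in Python (assuming the dashboard computes the allow rate as granted/resolved and applies the interview lift as an additive percentage-point adjustment; both are assumptions about the tool, not documented behavior):

```python
# Reproduce the examiner dashboard figures from the raw counts shown above.
granted = 484
resolved = 639

allow_rate = granted / resolved                 # 0.757... -> displayed as 76%
print(f"Career allow rate: {allow_rate:.1%}")   # 75.7%

interview_lift = 0.207                          # +20.7 percentage points
with_interview = allow_rate + interview_lift
print(f"With interview:    {with_interview:.1%}")  # 96.4% -> displayed as 96%
```

The displayed 76% and 96% match these values after rounding to whole percentages.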

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 34.8% (-5.2% vs TC avg)
§112: 4.6% (-35.4% vs TC avg)

Deltas are relative to a Tech Center average estimate • Based on career data from 639 resolved cases
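
If each delta is a simple percentage-point difference from the Tech Center average (a reading suggested, but not guaranteed, by the caption above), the implied baselines can be recovered directly:

```python
# Recover the implied Tech Center averages from the displayed deltas.
# Assumption: delta = examiner_rate - tc_average, in percentage points.
stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "§101": (14.0, -26.0),
    "§103": (45.0, +5.0),
    "§102": (34.8, -5.2),
    "§112": (4.6, -35.4),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:4.1f}% vs TC avg {tc_avg:4.1f}%")
```

All four rows recover the same 40.0% baseline, which is consistent with the caption describing a single Tech Center average estimate rather than per-statute averages.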

Office Action (§102)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the claims filed on December 10, 2025. Claim 21 is amended; claims 2, 11 and 20 are canceled; and claims 1, 3-10, 12-19 and 21-22 are pending and examined below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3-10, 12-19 and 21-22 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Stumpf et al. (U.S. 2022/0366172).

With regard to claim 1, Stumpf teaches a method for automatized selection and documentation of personal highlights during journeys with a vehicle ([abstract] images from inside the vehicle and outside the vehicle, and can be used to create a highlight reel of the ride), the method comprising: capturing journey data during a journey of the vehicle ([abstract] images from inside the vehicle and outside the vehicle, and can be used to create a highlight reel of the ride. The images can be captured automatically and include images of passengers during the ride), the journey data comprising audio, image, or video data taken of an interior or an exterior of the vehicle (Fig. 4; [abstract] images can include images from inside the vehicle and outside the vehicle; [0022] the photos include images captured inside the vehicle with interior cameras), wherein the journey data is captured continuously during the journey by a vehicle sensor system of the vehicle (Fig. 2, 210; [0083] The vehicle 710a-710c continues to capture images throughout the ride); storing selected aspects of the journey data to a non-transient data storage ([0051] the generated media (i.e., captured images and/or a highlight reel) is stored in a cloud and associated with a user's account, such that the user has access to the media for a select period of time; [0079]; [0083] the vehicle 710a-710c uploads the images to the cloud 704, where they are saved for a selected period of time. Users may choose to transfer media to a personal cloud space or to download the media to a personal device) in response to a trigger criteria being met during the journey in an associated driving situation ([0009] the sensors are configured to capture a second set of images during the ride, and a central computing system is configured to receive the first and second sets of images and link the images with a user account. The first set of images include views outside the vehicle and the second set of images include views inside the vehicle), wherein the selected aspects of the journey data comprise selected audio or video streams of limited length or selected images documenting the associated driving situation ([0024] providing an entertaining and memorable experience on a passenger's first ride along with photos and/or a highlight reel of the ride can help create a brand allegiance; [0025] The captured moments can be combined to create a highlight reel including one or more trips); and generating a selection of personal journey highlights based on the selected aspects stored during the journey for presentation during or after the journey (Fig. 6A, 606; [0076] The first option, the “View Photos” button 606, allows a user to view the captured images. When the “view photos” button 606 is selected, the captured images are displayed individually such that the user can scroll through the images and select favorites; [0078] The second option, the “View Photo Reel” button 608, allows a user to view a highlight reel of captured images).

With regard to claim 3, the limitations are addressed above and Stumpf teaches wherein the journey data is retrieved from a personal electronic device of an occupant of the vehicle (Figs. 6A-6B; [0048] an app on the tablet in the vehicle allows users to manually take photos and videos of themselves inside the rideshare vehicle cabin. In some examples, users can use a button on a phone or device interface to instruct the in-vehicle camera to capture the photo; [0075] FIGS. 6A and 6B show examples 600, 620 of a device interface for receiving, viewing, and/or sharing captured images, according to some embodiments of the disclosure. In particular, FIG. 6A shows an example 600 of a device 602 showing a rideshare application interface 604).

With regard to claim 4, the limitations are addressed above and Stumpf teaches wherein the trigger criteria is set based on pre-established or user defined content criteria ([0009] a central computing system configured to receive the first and second sets of images and link the images with a user account; [0042] Sensors inside the vehicle can be used to detect various cues. For example, one or more microphones in the vehicle can be used to detect laughter, and detection of laughter can trigger image-capturing. In some examples, a user can request a photo vocally, and microphones are used to detect the user request. In some examples, a user may elect to be periodically reminded to pose for a photo, e.g., every ten minutes).

With regard to claim 5, the limitations are addressed above and Stumpf teaches wherein the trigger criteria is successively refined based on machine learning algorithms taking user preferences or user interactions into account ([0005] Machine learning can be used to automatically identify special moments during a ride. In some examples, an autonomous vehicle also provides a customizable manual photobooth experience; [0045] machine learning is used to identify moments that are worth capturing images of. In one example, the passengers include two friends who are having a great time and laughing hysterically in the cabin. One or more of speech analysis, detection of high decibel levels, and image analysis to detect a smile, can be used to identify the moment as one of interest and trigger in-vehicle cameras to capture one or more images).

With regard to claim 6, the limitations are addressed above and Stumpf teaches wherein the trigger criteria comprises an occurrence of passing of points of interest along a route of the vehicle ([0028] the autonomous vehicle 110 uses sensor information from the sensor suite 102 to determine its location, to navigate traffic, to sense and avoid obstacles, and to sense its surroundings; [0029] sensor suite 102 data can provide localized traffic information. In this way, sensor suite 102 data from many autonomous vehicles can continually provide feedback to the mapping system and the high fidelity map can be updated as more and more information is gathered), notable traffic or driving situations involving the vehicle or happening near the vehicle ([0028] the autonomous vehicle 110 uses sensor information from the sensor suite 102 to determine its location, to navigate traffic, to sense and avoid obstacles, and to sense its surroundings; [0155] Driving behavior may include a description of a controlled operation and movement of an autonomous vehicle and the manner in which the autonomous vehicle applies traffic rules during one or more driving sessions), notable behavior of an occupant of the vehicle ([0035] Driving behavior of an autonomous vehicle may be modified according to explicit input or feedback (e.g., a passenger specifying a maximum speed or a relative comfort level), implicit input or feedback (e.g., a passenger's heart rate), or any other suitable data or manner of communicating driving behavior preferences), notable interactions between multiple occupants of the vehicle, notable interactions between the occupant of the vehicle and the vehicle ([0042] one or more microphones in the vehicle can be used to detect laughter, and detection of laughter can trigger image-capturing; [0045] One or more of speech analysis, detection of high decibel levels, and image analysis to detect a smile, can be used to identify the moment as one of interest and trigger in-vehicle cameras to capture one or more images), trigger signals from a personal electronic device of the occupant ([0050] a user can use the rideshare app to trigger the one or more image capture event while the user (and some friends) strike poses for the photos), or ingress or egress of the occupant of the vehicle.

With regard to claim 7, the limitations are addressed above and Stumpf teaches wherein the journey data comprises a driving history of the vehicle ([0029] sensor suite 102 data is used to detect selected events. In particular, data from the sensor suite 102 can be used to update a map with information used to develop layers with waypoints identifying selected events, the locations of the encountered events, and the frequency with which the events are encountered at the identified location), wherein the driving history comprises navigation data, environmental data, driving parameters, or status data of the vehicle ([0029] update a map with information used to develop layers with waypoints identifying selected events, the locations of the encountered events, and the frequency with which the events are encountered at the identified location), and wherein the selected aspects of the journey data are stored together with associated aspects of the driving history ([0029] sensor suite 102 data is used to detect selected events…sensor suite 102 data from many autonomous vehicles can continually provide feedback to the mapping system and the high fidelity map can be updated as more and more information is gathered).

With regard to claim 8, the limitations are addressed above and Stumpf teaches wherein the journey data having sensitive content is blocked from storage ([0052] the photos are filtered before being sent to a passenger, to blur or block out other people caught in the pictures and maintain their privacy; [0059] another vehicle may be blocking a view of a photogenic location. In other examples, construction prevents a view of a photogenic location).

With regard to claim 9, the limitations are addressed above and Stumpf teaches wherein the journey data comprises external data captured by other vehicles in an area of the vehicle during a respective driving situation ([0005] external sensors to capture a ride's most special moments in an easily shareable highlight reel format; [0032] other interior and/or exterior sensors can be used to detect that a passenger has exited the vehicle; [0055] In general, outside the vehicle, a picture may be captured if the map indicates the vehicle is at a great photo location (such as on a high hill overlooking the city), the weather is clear enough, and vehicle sensors don't indicate that there is an object (such as another vehicle) in the way of the shot; [0066] multiple vehicles can be used, such that if one rideshare vehicle passes another rideshare vehicle, the first rideshare vehicle can take a photo of the second rideshare vehicle) and wherein the method further comprises: wirelessly receiving the external data from the other vehicles ([0034] the onboard computer 104 is connected to the Internet via a wireless connection (e.g., via a cellular data connection). In some examples, the onboard computer 104 is coupled to any number of wireless or wired communication systems; [0082] the vehicles 710a-710c communicate wirelessly with a cloud 704 and a central computer 702) and considering the external data for the selected aspects of the journey data ([0005] external sensors to capture a ride's most special moments in an easily shareable highlight reel format; [0032] other interior and/or exterior sensors can be used to detect that a passenger has exited the vehicle; [0055] In general, outside the vehicle, a picture may be captured if the map indicates the vehicle is at a great photo location (such as on a high hill overlooking the city), the weather is clear enough, and vehicle sensors don't indicate that there is an object (such as another vehicle) in the way of the shot; [0066] multiple vehicles can be used, such that if one rideshare vehicle passes another rideshare vehicle, the first rideshare vehicle can take a photo of the second rideshare vehicle); or wirelessly sharing the selected aspects of the journey data ([0005] external sensors to capture a ride's most special moments in an easily shareable highlight reel format; [0034] the onboard computer 104 is connected to the Internet via a wireless connection (e.g., via a cellular data connection). In some examples, the onboard computer 104 is coupled to any number of wireless or wired communication systems; [0082] the vehicles 710a-710c communicate wirelessly with a cloud 704 and a central computer 702) or data about user preferences or user interactions with the other vehicles ([0005] external sensors to capture a ride's most special moments in an easily shareable highlight reel format; [0032] other interior and/or exterior sensors can be used to detect that a passenger has exited the vehicle; [0055] In general, outside the vehicle, a picture may be captured if the map indicates the vehicle is at a great photo location (such as on a high hill overlooking the city), the weather is clear enough, and vehicle sensors don't indicate that there is an object (such as another vehicle) in the way of the shot; [0066] multiple vehicles can be used, such that if one rideshare vehicle passes another rideshare vehicle, the first rideshare vehicle can take a photo of the second rideshare vehicle).

With regard to claim 10, Stumpf teaches a system for automatized selection and documentation of personal highlights during journeys with a vehicle ([abstract] images from inside the vehicle and outside the vehicle, and can be used to create a highlight reel of the ride), the system comprising: a vehicle sensor system (Fig. 1, sensor suite 102) configured to capture journey data continuously during a journey of the vehicle ([abstract] images from inside the vehicle and outside the vehicle, and can be used to create a highlight reel of the ride. The images can be captured automatically and include images of passengers during the ride); and a control device ([0033] the onboard computer 104 controls and/or modifies driving behavior of the autonomous vehicle 110) configured to: access the journey data captured during the journey of the vehicle (Fig. 2, 210; [abstract] images from inside the vehicle and outside the vehicle, and can be used to create a highlight reel of the ride. The images can be captured automatically and include images of passengers during the ride; [0083] The vehicle 710a-710c continues to capture images throughout the ride), the journey data comprising audio, image, or video data taken of an interior or an exterior of the vehicle (Fig. 4; [abstract] images can include images from inside the vehicle and outside the vehicle; [0022] the photos include images captured inside the vehicle with interior cameras); store selected aspects of the journey data to a non-transient data storage ([0051] the generated media (i.e., captured images and/or a highlight reel) is stored in a cloud and associated with a user's account, such that the user has access to the media for a select period of time; [0079]; [0083] the vehicle 710a-710c uploads the images to the cloud 704, where they are saved for a selected period of time. Users may choose to transfer media to a personal cloud space or to download the media to a personal device) in response to a trigger criteria being met during the journey in an associated driving situation ([0009] the sensors are configured to capture a second set of images during the ride, and a central computing system is configured to receive the first and second sets of images and link the images with a user account. The first set of images include views outside the vehicle and the second set of images include views inside the vehicle), wherein the selected aspects of the journey data comprise selected audio or video streams of limited length or selected images documenting the associated driving situation ([0024] providing an entertaining and memorable experience on a passenger's first ride along with photos and/or a highlight reel of the ride can help create a brand allegiance; [0025] The captured moments can be combined to create a highlight reel including one or more trips); and generate a selection of personal journey highlights based on the selected aspects stored during the journey for presentation during or after the journey (Fig. 6A, 606; [0076] The first option, the “View Photos” button 606, allows a user to view the captured images. When the “view photos” button 606 is selected, the captured images are displayed individually such that the user can scroll through the images and select favorites; [0078] The second option, the “View Photo Reel” button 608, allows a user to view a highlight reel of captured images).

With regard to claims 12-18, these system claims correspond to method claims 3-9, respectively, and are rejected with the same rationale.

With regard to claim 19, Stumpf teaches a vehicle ([abstract] an autonomous vehicle ride) comprising: a vehicle sensor system (Fig. 1, sensor suite 102) configured to capture journey data continuously during a journey of the vehicle ([abstract] images from inside the vehicle and outside the vehicle, and can be used to create a highlight reel of the ride. The images can be captured automatically and include images of passengers during the ride); and a control device ([0033] the onboard computer 104 controls and/or modifies driving behavior of the autonomous vehicle 110) configured to: access the journey data captured during the journey of the vehicle (Fig. 2, 210; [abstract] images from inside the vehicle and outside the vehicle, and can be used to create a highlight reel of the ride. The images can be captured automatically and include images of passengers during the ride; [0083] The vehicle 710a-710c continues to capture images throughout the ride), the journey data comprising audio, image, or video data taken of an interior or an exterior of the vehicle (Fig. 4; [abstract] images can include images from inside the vehicle and outside the vehicle; [0022] the photos include images captured inside the vehicle with interior cameras); store selected aspects of the journey data to a non-transient data storage ([0051] the generated media (i.e., captured images and/or a highlight reel) is stored in a cloud and associated with a user's account, such that the user has access to the media for a select period of time; [0079]; [0083] the vehicle 710a-710c uploads the images to the cloud 704, where they are saved for a selected period of time. Users may choose to transfer media to a personal cloud space or to download the media to a personal device) in response to a trigger criteria being met during the journey in an associated driving situation ([0009] the sensors are configured to capture a second set of images during the ride, and a central computing system is configured to receive the first and second sets of images and link the images with a user account. The first set of images include views outside the vehicle and the second set of images include views inside the vehicle), wherein the selected aspects of the journey data comprise selected audio or video streams of limited length or selected images documenting the associated driving situation ([0024] providing an entertaining and memorable experience on a passenger's first ride along with photos and/or a highlight reel of the ride can help create a brand allegiance; [0025] The captured moments can be combined to create a highlight reel including one or more trips) and wherein the trigger criteria comprises an occurrence of passing of points of interest along a route of the vehicle ([0028] the autonomous vehicle 110 uses sensor information from the sensor suite 102 to determine its location, to navigate traffic, to sense and avoid obstacles, and to sense its surroundings; [0029] sensor suite 102 data can provide localized traffic information. In this way, sensor suite 102 data from many autonomous vehicles can continually provide feedback to the mapping system and the high fidelity map can be updated as more and more information is gathered), notable traffic or driving situations involving the vehicle or happening in an area of the vehicle ([0028] the autonomous vehicle 110 uses sensor information from the sensor suite 102 to determine its location, to navigate traffic, to sense and avoid obstacles, and to sense its surroundings; [0155] Driving behavior may include a description of a controlled operation and movement of an autonomous vehicle and the manner in which the autonomous vehicle applies traffic rules during one or more driving sessions), notable behavior of an occupant of the vehicle, notable interactions between multiple occupants of the vehicle ([0035] Driving behavior of an autonomous vehicle may be modified according to explicit input or feedback (e.g., a passenger specifying a maximum speed or a relative comfort level), implicit input or feedback (e.g., a passenger's heart rate), or any other suitable data or manner of communicating driving behavior preferences), notable interactions between the occupant of the vehicle and the vehicle ([0042] one or more microphones in the vehicle can be used to detect laughter, and detection of laughter can trigger image-capturing; [0045] One or more of speech analysis, detection of high decibel levels, and image analysis to detect a smile, can be used to identify the moment as one of interest and trigger in-vehicle cameras to capture one or more images), trigger signals from a personal electronic device of the occupant ([0050] a user can use the rideshare app to trigger the one or more image capture event while the user (and some friends) strike poses for the photos), or ingress or egress of the occupant of the vehicle; and generate a selection of personal journey highlights based on the selected aspects stored during the journey for presentation during or after the journey (Fig. 6A, 606; [0076] The first option, the “View Photos” button 606, allows a user to view the captured images. When the “view photos” button 606 is selected, the captured images are displayed individually such that the user can scroll through the images and select favorites; [0078] The second option, the “View Photo Reel” button 608, allows a user to view a highlight reel of captured images).

With regard to claim 21, the limitations are addressed above and Stumpf teaches wherein the control device is configured to retrieve the journey data from the personal electronic device of the occupant of the vehicle (Figs. 6A-6B; [0048] an app on the tablet in the vehicle allows users to manually take photos and videos of themselves inside the rideshare vehicle cabin. In some examples, users can use a button on a phone or device interface to instruct the in-vehicle camera to capture the photo; [0075] FIGS. 6A and 6B show examples 600, 620 of a device interface for receiving, viewing, and/or sharing captured images, according to some embodiments of the disclosure. In particular, FIG. 6A shows an example 600 of a device 602 showing a rideshare application interface 604).

With regard to claim 22, the limitations are addressed above and Stumpf teaches wherein the control device is configured to successively refine the trigger criteria based on machine learning algorithms taking user preferences or user interactions into account ([0005] Machine learning can be used to automatically identify special moments during a ride. In some examples, an autonomous vehicle also provides a customizable manual photobooth experience; [0045] machine learning is used to identify moments that are worth capturing images of. In one example, the passengers include two friends who are having a great time and laughing hysterically in the cabin. One or more of speech analysis, detection of high decibel levels, and image analysis to detect a smile, can be used to identify the moment as one of interest and trigger in-vehicle cameras to capture one or more images).

Response to Arguments

Applicant's arguments, filed 12-4-2025, have been fully considered and are persuasive. The 35 U.S.C. § 102 rejection over Ikeda (US 2023/0093446) has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Stumpf et al. (U.S. 2022/0366172).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREA C. LEGGETT, whose telephone number is (571) 270-7700. The examiner can normally be reached M-F 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached at 571-272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREA C LEGGETT/
Primary Examiner, Art Unit 2171
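
For readers mapping the rejection onto the claim language, the independent claims recite a capture/trigger/compile pipeline: continuous capture of journey data, storage of limited-length clips when a trigger criterion is met in an associated driving situation, and generation of a highlight selection for presentation. Below is a minimal illustrative sketch of that flow; every name and threshold is a hypothetical stand-in, and it represents neither the application's implementation nor Stumpf's disclosure:

```python
# Hypothetical sketch of the claimed pipeline: continuous capture,
# trigger-gated storage of short clips, and highlight compilation.
# All names, trigger checks, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Clip:
    start_s: float
    end_s: float
    situation: str  # the "associated driving situation"

@dataclass
class HighlightRecorder:
    max_clip_s: float = 10.0                    # "streams of limited length"
    stored: list = field(default_factory=list)  # "non-transient storage" stand-in

    def trigger_met(self, frame: dict) -> bool:
        # Stand-ins for the claimed trigger criteria: points of interest,
        # notable occupant behavior, device trigger signals, etc.
        return bool(frame.get("laughter") or frame.get("poi"))

    def on_frame(self, t: float, frame: dict) -> None:
        # Called for every frame of the continuously captured journey data.
        if self.trigger_met(frame):
            situation = "point of interest" if frame.get("poi") else "occupant behavior"
            self.stored.append(Clip(t, t + self.max_clip_s, situation))

    def highlights(self) -> list:
        # "generate a selection of personal journey highlights"
        return sorted(self.stored, key=lambda c: c.start_s)

rec = HighlightRecorder()
rec.on_frame(12.0, {"laughter": True})  # e.g., microphone detects laughter
rec.on_frame(85.0, {"poi": True})       # e.g., vehicle passes a point of interest
print(rec.highlights())
```

Claims 5 and 22 would replace the fixed trigger_met predicate with criteria successively refined by machine learning from user preferences and interactions; the hard-coded checks above are placeholders only.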

Prosecution Timeline

Jul 10, 2023 • Application Filed
Mar 07, 2025 • Non-Final Rejection — §102
Jun 13, 2025 • Response Filed
Sep 02, 2025 • Final Rejection — §102
Nov 25, 2025 • Interview Requested
Dec 04, 2025 • Response after Non-Final Action
Dec 04, 2025 • Applicant Interview (Telephonic)
Dec 08, 2025 • Examiner Interview Summary
Jan 06, 2026 • Non-Final Rejection — §102 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12578478
Method for Checking the Integrity of GNSS Correction Data Provided without Associated Integrity Information
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12576855
ELECTRONIC DEVICE AND METHOD FOR UPDATING WEATHER INFORMATION BASED ON ACTIVITY STATE OF USER USING THE SAME
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12532148
METHODS, DEVICES, AND SYSTEMS FOR VEHICLE TRACKING
Granted Jan 20, 2026 • 2y 5m to grant
Patent 12530962
SELECTING TRAFFIC ALGORITHMS TO GENERATE TRAFFIC DATA
Granted Jan 20, 2026 • 2y 5m to grant
Patent 12529568
RIDE EXPERIENCE ENHANCEMENTS WITH EXTERNAL SERVICES
Granted Jan 20, 2026 • 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 96% (+20.7%)
Median Time to Grant: 3y 4m
PTA Risk: High

Based on 639 resolved cases by this examiner. Grant probability derived from career allow rate.
