Prosecution Insights
Last updated: April 19, 2026
Application No. 18/610,921

SYSTEMS AND METHODS FOR PERFORMING ENHANCED SELF-PARK MANEUVER USING AUDIO SENSOR INPUT

Non-Final OA (§103)
Filed: Mar 20, 2024
Examiner: YANG, JAMES J
Art Unit: 2686
Tech Center: 2600 — Communications
Assignee: Kia Corporation
OA Round: 3 (Non-Final)
Grant Probability: 57% (Moderate)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 57% (409 granted / 720 resolved; -5.2% vs TC avg)
Interview Lift: +21.5% on resolved cases with interview (a strong lift)
Typical Timeline: 3y 2m average prosecution; 47 applications currently pending
Career History: 767 total applications across all art units
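The headline figures above follow directly from the raw counts the panel reports. A quick check, assuming (as the projections section states) that the "with interview" figure is simply the career allow rate plus the interview lift:

```python
# Recompute the dashboard's examiner figures from its raw counts.
granted, resolved = 409, 720

allow_rate = granted / resolved * 100       # career allow rate, in percent
print(round(allow_rate))                    # 57

interview_lift = 21.5                       # percentage points, as stated
print(round(allow_rate + interview_lift))   # 78 ("with interview")
```

409/720 is 56.8%, which the panel rounds to 57%; adding the 21.5-point lift gives the 78% "with interview" figure.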

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 720 resolved cases.
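The "vs TC avg" deltas are consistent with a single Tech Center baseline, assuming each delta is the examiner's rate minus the estimated TC average:

```python
# Derive the implied Tech Center average from each (rate, delta) pair.
# Assumes: delta = examiner rate - TC average, so TC average = rate - delta.
stats = {
    "§101": (3.6, -36.4),
    "§103": (56.7, 16.7),
    "§102": (13.1, -26.9),
    "§112": (20.0, -20.0),
}
for statute, (rate, delta) in stats.items():
    print(statute, round(rate - delta, 1))  # every statute implies 40.0
```

All four pairs back out the same 40.0% baseline, suggesting the tool benchmarks each statute against one TC-wide estimate rather than per-statute averages.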

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to Applicant’s amendment and request for continued examination filed 02/17/2026. Claims 1-2, 5, 7-13, and 16-20 are currently pending in this application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-6, 8, 10-13, 16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Akotkar et al. (U.S. 10,747,231 B2) in view of Srinivasa et al. (U.S. 2018/0174042 A1).

Claim 1, Akotkar teaches: A system for performing maneuvers (Akotkar, Fig. 2, Col. 12, Lines 38-41, The operation of the vehicle includes a parking operation.), comprising: one or more audio sensors (Akotkar, Fig. 2: 201) coupled to a vehicle configured to generate audio sensor data of an environment of the vehicle (Akotkar, Col.
3, Lines 64-67 through Col. 4, Lines 1-7, The plurality of microphones may be distributed around the vehicle 102 to form a substantially 360° scope, i.e. the environment of the vehicle 102.); one or more visual sensors coupled to the vehicle configured to generate visual sensor data of the environment of the vehicle (Akotkar, Col. 4, Lines 26-30, The communication interface 206 receives an audio signal 104 from the microphones 201. The communication interface 206 also receives video camera data.); and a computing device (Akotkar, Fig. 2: 218), comprising a processor (Akotkar, Fig. 5: 502) and a memory (Akotkar, Fig. 5: 504, 506, Col. 8, Lines 30-32, The computing device 500 includes onboard computer 218.), wherein the memory comprises instructions (Akotkar, Fig. 5: 522) that, when executed by the processor, are configured to cause the processor (Akotkar, Col. 9, Lines 38-49) to: cause the vehicle to perform driving operations (Akotkar, Col. 5, Lines 36-43); receive the audio sensor data (Akotkar, Col. 4, Lines 4-15, Audio signal 104 is audio sensor data.) and the visual sensor data (Akotkar, Col. 4, Lines 26-30); calculate a risk evaluation based on the audio sensor data and the visual sensor data (Akotkar, Col. 4, Lines 50-67, The decision unit 220 consolidates and processes information delivered by the plurality of microphones 201 to autonomously or semi-autonomously drive the vehicle 102. The driving of the vehicle 102 may further include data from at least one vision-based device (see Akotkar, Col. 5, Lines 44-49). The classification of the one or more frames of a detected audio signal and the determination of subsequent operating of the vehicle 102 is functionally equivalent to a risk evaluation.); and using a neural network, generate a confidence score based on the risk evaluation (Akotkar, Col. 5, Lines 1-5, The probability score is used to determine whether an associated audio signal includes an emergency alarm signal. The probability score is received from the Deep Neural Network (DNN) and is based on a classification of received audio signals.), the confidence score indicating a confidence that the vehicle is safely parked using a function (Akotkar, Col. 5, Lines 1-57, The probability score, in combination with the vehicle’s navigation control system, is used to determine when and how the vehicle should respond to a detected alarm signal. For example, the approximate location of the emergency vehicle can be identified and the vehicle can determine whether or not to pull over because of the presence of an emergency vehicle. It is within the scope of the teachings of Akotkar, for the vehicle to determine its current position to be safely out of the way of the path of the emergency vehicle, wherein the probability score, which is indicative of the presence of the emergency vehicle, is subsequently used to indicate whether the vehicle is not impeding the emergency vehicle. The current position of the vehicle, if stopped, would be functionally equivalent to a safely parked position.); and wherein generating the confidence score comprises: calculating the confidence score above or below a first threshold (Akotkar, Col. 5, Lines 1-5).

Akotkar does not specifically teach: Enhanced self-park maneuvers and a remote smart parking assist (RSPA) function to self-park the vehicle; calculating the confidence score to be low when the confidence score is below a first threshold; calculating the confidence score as medium when the confidence score is above the first threshold and below a second threshold; and calculating the confidence score as high when the confidence score is above the second threshold; when the confidence score is low, performing: terminating the RSPA function; and returning control of the vehicle to a driver; when the confidence score is medium, performing: proceeding with the RSPA function with implementation of one or more cautionary functions; and when the confidence score is high, performing: proceeding with completion of the RSPA function.

However, it would have been obvious to one of ordinary skill in the art, at the time of filing, for the system of Akotkar to be capable of parking the vehicle 102 in response to an alarm signal. Akotkar discloses, in response to a detected alarm signal, that the vehicle 102 is capable of pulling over to the side of the road (see Akotkar, Col. 5, Lines 54-58). Additionally, Akotkar discloses the ability for the vehicle 102 to park (see Akotkar, Col. 12, Lines 38-41) and that the vehicle 102 includes a computer-aided driving system 101 capable of semi-autonomous or autonomous driving (see Akotkar, Col. 3, Lines 16-23). Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, for the pulling over and the parking of the vehicle 102 to be functionally equivalent to enhanced self-park maneuvers and a remote smart parking assist (RSPA) function, which is interpreted as a semi-autonomous or autonomous parking assist function.

Srinivasa teaches: Calculating the confidence score to be low when the confidence score is below a first threshold (Srinivasa, Paragraph [0055], The second threshold value is lower than a first threshold value, which represents samples that have a very low prediction score, e.g. confidence level.); calculating the confidence score as medium when the confidence score is above the first threshold and below a second threshold (Srinivasa, Paragraphs [0054-0055], Excluded samples 815 whose prediction scores, e.g. confidence levels, were lower than the first threshold value but higher than the second threshold value are not included in excluded samples 825.); and calculating the confidence score as high when the confidence score is above the second threshold (Srinivasa, Paragraph [0054], Samples that had prediction scores, e.g. confidence levels, not below the first threshold value are successfully labeled and thus not included in excluded samples 815.).

Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Akotkar by introducing a plurality of threshold values, as taught by Srinivasa. The motivation would be to integrate an accurate but less complex method of training a neural network (see Srinivasa, Paragraph [0003]).

Akotkar in view of Srinivasa further teaches: When the confidence score is low, the one or more suitable actions comprise: terminating the RSPA function; and returning control of the vehicle to a driver; when the confidence score is medium, the one or more suitable actions comprise: proceeding with the RSPA function with implementation of one or more cautionary functions; and when the confidence score is high, the one or more suitable actions comprise: proceeding with completion of the RSPA function (Akotkar, Col. 5, Lines 54-58, Col. 12, Lines 38-41, The operation of the vehicle includes a parking operation. When the probability score is indicative of an emergency alarm signal, i.e. high, the vehicle 102 might slow down or pull over to the side of the road, i.e. park. The Examiner notes that claim 3 recites 3 scenarios, i.e. confidence score is low, medium, or high, which are in the alternative form because a confidence score cannot be simultaneously low, medium, and high. Therefore, only the one or more suitable actions corresponding to the particular scenario, e.g. low, medium, or high are required. For purposes of examination, the claims are interpreted in light of a confidence score being high.).

Claim 2, Akotkar in view of Srinivasa further teaches: The system of claim 1, wherein calculating the risk evaluation comprises training the neural network according to a training feedback loop (Akotkar, Col. 4, Lines 6-19, A plurality of different types of training may be used. An example that utilizes a feedback loop includes a recurrent neural network (RNN).).

Claim 5, Akotkar in view of Srinivasa further teaches: The system of claim 1, wherein the one or more cautionary functions comprise one or more of the following: reducing a speed of the vehicle (Akotkar, Col. 5, Lines 54-58, Pulling over of the vehicle effectively reduces the speed of the vehicle. It would have been obvious to one of ordinary skill in the art, at the time of filing, in the combination of Akotkar in view of Srinivasa, for the emergency vehicle identified by the DNN to be above the second threshold but below the first threshold of Srinivasa (see Srinivasa, Paragraphs [0054-0055]), thereby being equivalent to having a confidence score of medium. Thus, the pulling over of the vehicle 102 effectively proceeds with the parking procedure (see Akotkar, Col. 5, Lines 54-58).); turning on headlights of the vehicle; turning on hazard lights of the vehicle; increasing a sensor sampling rate of the one or more audio sensors; or increasing a sensor sampling rate of the one or more visual sensors.

Claim 8, Akotkar in view of Srinivasa further teaches: The system of claim 1, wherein the calculating the risk evaluation comprises analyzing the audio sensor data to: identify a vehicle horn sound from the audio sensor data to determine one or more characteristics of the vehicle horn sound (Akotkar, Col. 5, Lines 1-5 and 36-49, The emergency alarm signal includes an emergency vehicle, and it would have been obvious to one of ordinary skill in the art, at the time of filing, for the alarm signals associated with an emergency vehicle to include the horn of the emergency vehicle.); based on the one or more characteristics, match the vehicle horn sound to a vehicle model (Akotkar, Col. 5, Lines 1-5 and 36-49, The vehicle model is an emergency vehicle, e.g. emergency vehicle 106 of Fig. 1.); determine whether one or more sounds from the audio sensor data belong to one or more animals or humans (Akotkar, Col. 7, Lines 55-61, The DNN is trained with animal sounds, baby and child sounds, and adult sounds.); determine, based on one or more sound characteristics, whether one or more sounds from the audio sensor data are generated from one or more objects that are approaching the vehicle (Akotkar, Col. 5, Lines 12-17); and determine, based on one or more sound characteristics, whether one or more sounds from the audio sensor data are generated from one or more objects that are departing from the vehicle (Akotkar, Col. 5, Lines 12-17 and 36-53, The vehicle 102 is able to determine the approximate location of the emergency vehicle. Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, for the vehicle 102 to be capable of determining whether the emergency vehicle is approaching or not approaching, e.g. departing, the vehicle 102 based on the known location of the emergency vehicle.).

Claim 10, Akotkar in view of Srinivasa further teaches: The system of claim 1, wherein the calculating the risk evaluation comprises analyzing the visual sensor data and the audio sensor data to match a horn sound to a visual detection of a secondary vehicle (Akotkar, Col. 5, Lines 1-5 and 36-49, The emergency alarm signal includes an emergency vehicle, and it would have been obvious to one of ordinary skill in the art, at the time of filing, for the alarm signals associated with an emergency vehicle to include the horn of the emergency vehicle. Thus, when utilizing the vision-based camera (see Akotkar, Col. 5, Lines 44-53), an alarm signal would be represented by the emergency vehicle identified via the received sound and an image of the emergency vehicle captured by the vision-based camera.).

Claim 11, Akotkar in view of Srinivasa further teaches: The system of claim 1, further comprising the vehicle, wherein the vehicle comprises: an autonomous vehicle; or a semi-autonomous vehicle (Akotkar, Col. 2, Lines 48-52).

Claim 12, Akotkar teaches: A method for performing maneuvers (Akotkar, Fig. 2, Col. 12, Lines 38-41, The operation of the vehicle includes a parking operation.), comprising: generating audio sensor data of an environment of a vehicle (Akotkar, Col. 3, Lines 64-67 through Col. 4, Lines 1-7, The plurality of microphones may be distributed around the vehicle 102 to form a substantially 360° scope, i.e. the environment of the vehicle 102.) via one or more audio sensors coupled to the vehicle (Akotkar, Fig. 2: 201); generating visual sensor data of an environment of the vehicle via one or more visual sensors coupled to the vehicle (Akotkar, Col. 4, Lines 26-30, The communication interface 206 receives an audio signal 104 from the microphones 201. The communication interface 206 also receives video camera data.); and using a computing device (Akotkar, Fig. 2: 218), comprising a processor (Akotkar, Fig. 5: 502) and a memory (Akotkar, Fig. 5: 504, 506, Col. 8, Lines 30-32, The computing device 500 includes onboard computer 218.), receiving the audio sensor data (Akotkar, Col. 4, Lines 4-15, Audio signal 104 is audio sensor data.) and the visual sensor data (Akotkar, Col.
4, Lines 26-30); calculating a risk evaluation based on the audio sensor data and the visual sensor data (Akotkar, Col. 4, Lines 50-67, The decision unit 220 consolidates and processes information delivered by the plurality of microphones 201 to autonomously or semi-autonomously drive the vehicle 102. The driving of the vehicle 102 may further include data from at least one vision-based device (see Akotkar, Col. 5, Lines 44-49). The classification of the one or more frames of a detected audio signal and the determination of subsequent operating of the vehicle 102 is functionally equivalent to a risk evaluation.); using a neural network, generating a confidence score based on the risk evaluation (Akotkar, Col. 5, Lines 1-5, The probability score is used to determine whether an associated audio signal includes an emergency alarm signal. The probability score is received from the Deep Neural Network (DNN) and is based on a classification of received audio signals.), the confidence score indicating a confidence that the vehicle is safely parked using a function (Akotkar, Col. 5, Lines 1-57, The probability score, in combination with the vehicle’s navigation control system, is used to determine when and how the vehicle should respond to a detected alarm signal. For example, the approximate location of the emergency vehicle can be identified and the vehicle can determine whether or not to pull over because of the presence of an emergency vehicle. It is within the scope of the teachings of Akotkar, for the vehicle to determine its current position to be safely out of the way of the path of the emergency vehicle, wherein the probability score, which is indicative of the presence of the emergency vehicle, is subsequently used to indicate whether the vehicle is not impeding the emergency vehicle. 
The current position of the vehicle, if stopped, would be functionally equivalent to a safely parked position.); wherein generating the confidence score comprises: calculating the confidence score above or below a first threshold (Akotkar, Col. 5, Lines 1-5).

Akotkar does not specifically teach: Enhanced self-park maneuvers and a remote smart parking assist (RSPA) function; calculating the confidence score to be low when the confidence score is below a first threshold; calculating the confidence score as medium when the confidence score is above the first threshold and below a second threshold; and calculating the confidence score as high when the confidence score is above the second threshold, and when the confidence score is low, performing: terminating a remote smart parking assist (RSPA) function; and returning control of the vehicle to a driver; when the confidence score is medium, performing: proceeding with the RSPA function with implementation of one or more cautionary functions; and when the confidence score is high, performing: performing the completion of the RSPA function.

However, it would have been obvious to one of ordinary skill in the art, at the time of filing, for the system of Akotkar to be capable of parking the vehicle 102 in response to an alarm signal. Akotkar discloses, in response to a detected alarm signal, that the vehicle 102 is capable of pulling over to the side of the road (see Akotkar, Col. 5, Lines 54-58). Additionally, Akotkar discloses the ability for the vehicle 102 to park (see Akotkar, Col. 12, Lines 38-41) and that the vehicle 102 includes a computer-aided driving system 101 capable of semi-autonomous or autonomous driving (see Akotkar, Col. 3, Lines 16-23). Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, for the pulling over and the parking of the vehicle 102 to be functionally equivalent to enhanced self-park maneuvers and a remote smart parking assist (RSPA) function, which is interpreted as a semi-autonomous or autonomous parking assist function.

Srinivasa teaches: Calculating the confidence score to be low when the confidence score is below a first threshold (Srinivasa, Paragraph [0055], The second threshold value is lower than a first threshold value, which represents samples that have a very low prediction score, e.g. confidence level.); calculating the confidence score as medium when the confidence score is above the first threshold and below a second threshold (Srinivasa, Paragraphs [0054-0055], Excluded samples 815 whose prediction scores, e.g. confidence levels, were lower than the first threshold value but higher than the second threshold value are not included in excluded samples 825.); and calculating the confidence score as high when the confidence score is above the second threshold (Srinivasa, Paragraph [0054], Samples that had prediction scores, e.g. confidence levels, not below the first threshold value are successfully labeled and thus not included in excluded samples 815.).

Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Akotkar by introducing a plurality of threshold values, as taught by Srinivasa. The motivation would be to integrate an accurate but less complex method of training a neural network (see Srinivasa, Paragraph [0003]).

Akotkar in view of Srinivasa further teaches: When the confidence score is low, the one or more suitable actions comprise: terminating a remote smart parking assist (RSPA) function; and returning control of the vehicle to a driver; when the confidence score is medium, the one or more suitable actions comprise: proceeding with the RSPA function with implementation of one or more cautionary functions; and when the confidence score is high, the one or more suitable actions comprise: performing the completion of the RSPA function (Akotkar, Col. 5, Lines 54-58, Col. 12, Lines 38-41, The operation of the vehicle includes a parking operation. When the probability score is indicative of an emergency alarm signal, i.e. high, the vehicle 102 might slow down or pull over to the side of the road, i.e. park. The Examiner notes that claim 3 recites 3 scenarios, i.e. confidence score is low, medium, or high, which are in the alternative form because a confidence score cannot be simultaneously low, medium, and high. Therefore, only the one or more suitable actions of claim 4 corresponding to the particular scenario, e.g. low, medium, or high are required. For purposes of examination, the claims are interpreted in light of a confidence score being high.).

Claim 13, Akotkar in view of Srinivasa further teaches: The method of claim 12, wherein calculating the risk evaluation comprises training the neural network according to a training feedback loop (Akotkar, Col. 4, Lines 6-19, A plurality of different types of training may be used. An example that utilizes a feedback loop includes a recurrent neural network (RNN).).

Claim 16, Akotkar in view of Srinivasa further teaches: The method of claim 15, wherein the one or more cautionary functions comprise one or more of the following: reducing a speed of the vehicle (Akotkar, Col. 5, Lines 54-58, Pulling over of the vehicle effectively reduces the speed of the vehicle. It would have been obvious to one of ordinary skill in the art, at the time of filing, in the combination of Akotkar in view of Srinivasa, for the emergency vehicle identified by the DNN to be above the second threshold but below the first threshold of Srinivasa (see Srinivasa, Paragraphs [0054-0055]), thereby being equivalent to having a confidence score of medium. Thus, the pulling over of the vehicle 102 effectively proceeds with the parking procedure (see Akotkar, Col. 5, Lines 54-58).); turning on headlights of the vehicle; turning on hazard lights of the vehicle; increasing a sensor sampling rate of the one or more audio sensors; or increasing a sensor sampling rate of the one or more visual sensors.

Claim 18, Akotkar in view of Srinivasa further teaches: The method of claim 12, wherein the calculating the risk evaluation comprises analyzing the audio sensor data to: identify a vehicle horn sound from the audio sensor data to determine one or more characteristics of the vehicle horn sound (Akotkar, Col. 5, Lines 1-5 and 36-49, The emergency alarm signal includes an emergency vehicle, and it would have been obvious to one of ordinary skill in the art, at the time of filing, for the alarm signals associated with an emergency vehicle to include the horn of the emergency vehicle.); based on the one or more characteristics, match the vehicle horn sound to a vehicle model (Akotkar, Col. 5, Lines 1-5 and 36-49, The vehicle model is an emergency vehicle, e.g. emergency vehicle 106 of Fig. 1.); determine whether one or more sounds from the audio sensor data belong to one or more animals or humans (Akotkar, Col. 7, Lines 55-61, The DNN is trained with animal sounds, baby and child sounds, and adult sounds.); determine, based on one or more sound characteristics, whether one or more sounds from the audio sensor data are generated from one or more objects that are approaching the vehicle (Akotkar, Col. 5, Lines 12-17); and determine, based on one or more sound characteristics, whether one or more sounds from the audio sensor data are generated from one or more objects that are departing from the vehicle (Akotkar, Col. 5, Lines 12-17 and 36-53, The vehicle 102 is able to determine the approximate location of the emergency vehicle. Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, for the vehicle 102 to be capable of determining whether the emergency vehicle is approaching or not approaching, e.g. departing, the vehicle 102 based on the known location of the emergency vehicle.).

Claim 20, Akotkar in view of Srinivasa further teaches: The method of claim 12, wherein the calculating the risk evaluation comprises analyzing the visual sensor data and the audio sensor data to match a horn sound to a visual detection of a secondary vehicle (Akotkar, Col. 5, Lines 1-5 and 36-49, The emergency alarm signal includes an emergency vehicle, and it would have been obvious to one of ordinary skill in the art, at the time of filing, for the alarm signals associated with an emergency vehicle to include the horn of the emergency vehicle. Thus, when utilizing the vision-based camera (see Akotkar, Col. 5, Lines 44-53), an alarm signal would be represented by the emergency vehicle identified via the received sound and an image of the emergency vehicle captured by the vision-based camera.).

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Akotkar et al. (U.S. 10,747,231 B2) in view of Srinivasa et al. (U.S. 2018/0174042 A1), in view of Merai et al. (U.S. 2019/0171897 A1).

Claim 7, Akotkar in view of Srinivasa teaches: The system of claim 1, wherein the DNN is trained to identify audio of emergency vehicles (Akotkar, Col. 5, Lines 1-5) and humans (Akotkar, Col. 7, Lines 55-61). Akotkar in view of Srinivasa does not specifically teach: Wherein the calculating the risk evaluation comprises analyzing the visual sensor data to: determine whether one or more humans and/or animals are present within the visual sensor data; and determine whether one or more vehicles are present within the visual sensor data. Merai teaches: Object recognition in an image with a confidence score (Merai, Paragraph [0051]). Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Akotkar in view of Srinivasa by integrating the teaching of object recognition of images from a camera, as taught by Merai. The motivation would be to improve the gathering of data by utilizing an improved machine learning process (see Merai, Paragraph [0055]).

Claim 17, Akotkar in view of Srinivasa further teaches: The method of claim 12, wherein the DNN is trained to identify audio of emergency vehicles (Akotkar, Col. 5, Lines 1-5) and humans (Akotkar, Col. 7, Lines 55-61). Akotkar in view of Srinivasa does not specifically teach: Wherein the calculating the risk evaluation comprises analyzing the visual sensor data to: determine whether one or more humans and/or animals are present within the visual sensor data; and determine whether one or more vehicles are present within the visual sensor data. Merai teaches: Object recognition in an image with a confidence score (Merai, Paragraph [0051]). Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Akotkar in view of Srinivasa by integrating the teaching of object recognition of images from a camera, as taught by Merai. The motivation would be to improve the gathering of data by utilizing an improved machine learning process (see Merai, Paragraph [0055]).

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Akotkar et al. (U.S. 10,747,231 B2) in view of Srinivasa et al. (U.S. 2018/0174042 A1), in view of Greene (U.S. 2016/0134785 A1).

Claim 9, Akotkar in view of Srinivasa further teaches: The system of claim 1. Akotkar in view of Srinivasa does not specifically teach: Wherein the calculating the risk evaluation comprises analyzing the visual sensor data and the audio sensor data to match speech to a visual detection of lip movement. Greene teaches: Match speech to a visual detection of lip movement (Greene, Paragraph [0062]). Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Akotkar in view of Srinivasa by integrating the teaching of a video and audio processing system, as taught by Greene. The motivation would be to ensure the synchronization of both the audio and video data (see Greene, Paragraph [0003]).

Claim 19, Akotkar in view of Srinivasa further teaches: The method of claim 12. Akotkar in view of Srinivasa does not specifically teach: Wherein the calculating the risk evaluation comprises analyzing the visual sensor data and the audio sensor data to match speech to a visual detection of lip movement. Greene teaches: Match speech to a visual detection of lip movement (Greene, Paragraph [0062]). Therefore, it would have been obvious to one of ordinary skill in the art, at the time of filing, to modify the system in Akotkar in view of Srinivasa by integrating the teaching of a video and audio processing system, as taught by Greene. The motivation would be to ensure the synchronization of both the audio and video data (see Greene, Paragraph [0003]).

Response to Arguments

Applicant's arguments filed 02/17/2026 have been fully considered but they are not persuasive. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In response to the Applicant’s arguments regarding the Applicant’s claimed amendments, the Examiner respectfully disagrees for the reasons set forth in the rejection above. Additionally, in response to the Applicant’s argument on Page 11 that the cited references are unrelated to the system’s ability to safely park a vehicle, the claims, as currently amended, do not inherently or explicitly define this aspect of the Applicant’s invention away from the interpretation in the rejection above. For instance, claim 1 defines that the confidence score indicates “a confidence that the vehicle is safely parked using the RSPA function”. “Parked” indicates that the vehicle has already been parked, and the claimed “RSPA function” does not specifically define the RSPA function to exclusively include the steps of physically parking/moving the vehicle. For example, the steps of determining how to proceed with a parking operation may also be interpreted as being part of a remote smart parking assist function.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES J YANG whose telephone number is (571)270-5170. The examiner can normally be reached 9:30am-6:00pm M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN ZIMMERMAN, can be reached at (571) 272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAMES J YANG/
Primary Examiner, Art Unit 2686
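The rejection turns on a two-threshold scheme that bands a confidence score into low, medium, and high, each mapped to an RSPA action. A minimal sketch of that claimed behavior, with hypothetical threshold values (the claims recite no numbers):

```python
# Sketch of the claimed two-threshold confidence banding.
# The 0.4 / 0.8 values are hypothetical illustrations only.
FIRST_THRESHOLD = 0.4    # below this: low confidence
SECOND_THRESHOLD = 0.8   # above this: high confidence

def rspa_action(confidence: float) -> str:
    """Map a neural-network confidence score to an RSPA action."""
    if confidence < FIRST_THRESHOLD:
        # low: abort the maneuver and hand control back to the driver
        return "terminate RSPA; return control to driver"
    if confidence < SECOND_THRESHOLD:
        # medium: continue, but engage cautionary functions
        return "proceed with RSPA + cautionary functions"
    # high: finish the parking maneuver normally
    return "complete RSPA function"

print(rspa_action(0.2))   # terminate RSPA; return control to driver
print(rspa_action(0.6))   # proceed with RSPA + cautionary functions
print(rspa_action(0.95))  # complete RSPA function
```

The bands are mutually exclusive, which is the point the Examiner makes about the alternative-form claim language: only one branch, and hence one set of actions, is ever triggered for a given score.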

Prosecution Timeline

Mar 20, 2024: Application Filed
Jul 11, 2025: Non-Final Rejection (§103)
Oct 15, 2025: Response Filed
Nov 13, 2025: Final Rejection (§103)
Feb 17, 2026: Request for Continued Examination
Feb 22, 2026: Response after Non-Final Action
Feb 25, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602812: MITIGATING EFFECTS CAUSED BY REPEATED AND/OR SPORADIC MOVEMENT OF OBJECTS IN A FIELD OF VIEW (granted Apr 14, 2026; 2y 5m to grant)
Patent 12604164: SYSTEM AND METHODS FOR HYDROGEN PLANT CONDITION MONITORING USING A WIRELESS MODULAR SENSOR SYSTEM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12579886: SYSTEM AND METHOD FOR USING V2X AND SENSOR DATA (granted Mar 17, 2026; 2y 5m to grant)
Patent 12570210: CONTROL APPARATUS FOR VEHICLE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12564526: BED HAVING SENSOR FUSING FEATURES USEFUL FOR DETERMINING SNORE AND BREATHING PARAMETERS (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 57%
With Interview: 78% (+21.5%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 720 resolved cases by this examiner. Grant probability is derived from the career allow rate.
