Prosecution Insights
Last updated: April 19, 2026
Application No. 18/348,812

METHOD AND SYSTEM FOR A PERSONALIZED VEHICLE INTERACTIVE MUSIC SYSTEM

Final Rejection §103
Filed: Jul 07, 2023
Examiner: TESHALE, AKELAW
Art Unit: 2694
Tech Center: 2600 — Communications
Assignee: Ford Global Technologies LLC
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82% — above average (687 granted / 834 resolved; +20.4% vs TC avg)
Interview Lift: +15.6% across resolved cases with an interview
Avg Prosecution: 2y 11m typical timeline; 33 applications currently pending
Total Applications: 867 across all art units

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 35.4% (-4.6% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)

Deltas are relative to Tech Center average estimates • Based on career data from 834 resolved cases
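The per-statute deltas imply a common Tech Center baseline; a quick sketch (illustrative arithmetic, not the vendor's actual computation) backs it out from the numbers shown above:

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rejection rates and the "vs TC avg" deltas listed above.
examiner_rate = {"101": 7.5, "103": 41.0, "102": 35.4, "112": 6.2}   # percent
delta_vs_tc   = {"101": -32.5, "103": 1.0, "102": -4.6, "112": -33.8}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)
```

The deltas are internally consistent: every statute backs out to roughly a 40% Tech Center baseline, with this examiner issuing §103 rejections at about the TC rate but far fewer §101 and §112 rejections.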

Office Action

§103
DETAILED ACTION

Response to Amendment

This action is in response to the communication filed on 08/15/2025. Claims 1-6, 8, 10-18, and 20-21 are pending in this action. Claims 7, 9, and 19 have been cancelled. Claim 21 has been added as a new claim. This action is final.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1, 3-6, 8, 10, 12, 14-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2019/0212160 A1 to Kennedy et al. (hereinafter "Kennedy") in view of U.S. Patent No. 10,332,495 B1 to Mortensen et al. (hereinafter "Mortensen").
Regarding claim 1, Kennedy teaches a method for providing an interactive music system in a vehicle, comprising:

storing a profile record associated with a user as a learned travel behavior of the user (previous listening behavior can be while traveling, not while traveling, or both), the profile record including information related to at least one of a music preference profile associated with the user and a usage time at which the user employs an interactive mode of the interactive music system (paragraphs [0047], [0084], and [0131]: media content can be selected for playback without user input based on stored user profile information, location, travel conditions, current events, and other criteria. Preferences may include whether the user prefers music over podcasts or audiobooks, preferred genres of music, preferred musical artists, preferred songs, preferred genres and authors of audiobooks, preferred types of news content, the user's listening preferences based on time of day, preferred types of videos, preferred sports teams, preferred amounts of news and entertainment content, preferred geographic area, and preferred types of content);

estimating a travel time of the vehicle based at least on a learned travel behavior of a user of the vehicle (paragraphs [0131], [0132], and [0136]: the duration 1504 indicates how long each segment 1508 is expected to take to travel, and the media selection 1506 indicates the type of media content that is played for each segment 1508 of the route). In other words, Kennedy estimates session duration (e.g., commute length) based on learned user behavior (e.g., past listening time and travel patterns);

obtaining data related to at least one drive condition (Fig. 18, paragraphs [0082] and [0139]: the travel server application 186 may store metadata and other information that associates media content items with geographic locations, forms of travel, route conditions, etc.);

defining a playlist based on the travel time and the music preference profile of the user in response to the travel time being within a duration threshold and the at least one drive condition being satisfied (Fig. 15, Fig. 18, paragraphs [0082], [0099], [0136], and [0139]: diagram 1500 shows an example program customized for a user driving from downtown Minneapolis to the Mall of America. The route begins with 3 minutes of driving on city streets with low traffic in good weather, during which an initial greeting is played that includes the expected trip duration along with traffic and weather updates. The user then travels on an interstate highway in moderate traffic with good weather for 8 minutes, during which a song is played, followed by news headlines and another song. Then, for 4 minutes, the user drives in a commercial area with heavy traffic in good weather); and

outputting a sound based on an audio signal indicative of a song from the playlist in the interactive mode using one or more speakers in the vehicle in response to at least one of the at least one drive condition being satisfied or a current time being the usage time (Fig. 15, Fig. 18, paragraphs [0082], [0099], [0136], and [0139]: same Minneapolis-to-Mall-of-America example quoted above).

Further, Kennedy discloses a user interface displayed on the media-playback device (Fig. 2 and Fig. 16). However, Kennedy does not teach displaying lyrics of the song on a display of the vehicle during the interactive mode.

In the same field of endeavor, Mortensen discloses displaying lyrics of the song on a display of the vehicle during the interactive mode (column 5, lines 40-53: displays 136a-136c can be used to show lyrics of a selected song; for some embodiments, other displays may also display lyrics of a song, video of a person singing a song, or other features of a karaoke experience). At the time of the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify Kennedy's teaching with the feature of displaying lyrics of the song on a display of the vehicle during the interactive mode, as taught by Mortensen, in order to provide opportunities for users to practice singing and performing various songs as a lead singer while the music plays (column 1, lines 11-13; Mortensen).
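The claim 1 limitation at issue, defining a playlist so its length fits an estimated travel time within a duration threshold, can be sketched as follows. The names (`Song`, `define_playlist`) and the greedy strategy are illustrative assumptions, not drawn from Kennedy or the application:

```python
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    seconds: int

def define_playlist(candidates, travel_time_s, threshold_s=120):
    """Greedily add preference-ranked songs until the playlist is within
    `threshold_s` of the estimated travel time, without overshooting it."""
    playlist, total = [], 0
    for song in candidates:  # assumed pre-sorted by the user's preference profile
        if total + song.seconds <= travel_time_s:
            playlist.append(song)
            total += song.seconds
        if travel_time_s - total <= threshold_s:
            break
    return playlist, total
```

For a 600-second commute and candidate songs of 240 s, 200 s, and 180 s, the sketch selects the first two (440 s total): the third would overshoot, so the remaining gap would be left for shorter filler content (greetings, news) as in Kennedy's Minneapolis example.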
Regarding claim 3, Kennedy teaches the method of claim 1, wherein the at least one drive condition includes at least one of a road condition, a traffic condition, or a weather condition (paragraphs [0126] and [0136]; The route conditions 1502 indicate the type of road, level of traffic, and weather conditions for each segment 1508 of the route). Regarding claim 4, Kennedy teaches the method of claim 3, wherein: the at least one drive condition includes each of the road condition, the traffic condition, and the weather condition (paragraphs [0126] and [0136]; The route conditions 1502 indicate the type of road, level of traffic, and weather conditions for each segment 1508 of the route), and each of the at least one drive condition is to be satisfied prior to providing the interactive mode (paragraphs [0126] and [0136]; the route conditions 1502 indicate the type of road, level of traffic, and weather conditions for each segment 1508 of the route. The duration 1504 indicates how long each segment 1508 is expected to take to travel. The media selection 1506 indicates the type of media content that is played for each segment 1508 of the route. The diagram 1500 shows an example program customized for a user driving from downtown Minneapolis to the Mall of America. In this example, the route begins with 3 minutes of driving on city streets with low traffic in good weather. During this segment an initial greeting is played for the user which includes the expected trip duration along with traffic and weather updates. Then the user travels on an interstate highway in moderate traffic with good weather for 8 minutes). 
Regarding claim 5, Kennedy teaches the method of claim 1, further comprising: monitoring the travel time to a destination after defining the playlist; and adjusting the playlist in response to the travel time changing (paragraphs [0167] and [0168]: the modified program 2504 shows how the media program is modified in response to a detour 2802 that increases the duration of the route. In this example, the detour required the program to be extended: two added songs 2804 are inserted into the media program, while the rest of the program remains the same).

Regarding claim 6, Kennedy does not teach the method of claim 1, wherein in the interactive mode, the audio signal is indicative of an instrumental portion of the song. In the same field of endeavor, Mortensen discloses this limitation (column 10, lines 24-31: the entertainment system plays the selected song and recorded audio. The song may be a recording with vocals removed that is streamed from a remote data source or from local data storage. The user's recorded vocals can then be played along with the selected song to provide an experience of being the singer in the song. The combined audio can be played over any number of speakers in the vehicle as set by the users). At the time of the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify Kennedy's teaching with this feature, as taught by Mortensen, in order to provide an experience of being the singer in the song (column 10, lines 24-31; Mortensen).

Regarding claim 8, Kennedy does not teach the method of claim 1, further comprising identifying an occupant in the vehicle, wherein the playlist is defined based on the music preference profile associated with the user in response to the identified occupant being the user.
In the same field of endeavor, Mortensen discloses identifying an occupant in the vehicle, wherein the playlist is defined based on the music preference profile associated with the user in response to the identified occupant being the user (column 4, lines 25-47: user identification system 122 is coupled to one or more user ID sensors 124, and together they are used to identify authorized users of system 100. User ID sensors 124 can be positioned inside or outside the vehicle (e.g., FIGS. 3A-3C) and used to identify the driver and/or passengers in the vehicle, all of whom can be users of interactive mapping system 100. The user identification system 122 can be used to sign into user account 104; for example, it can log one or more users in the vehicle into a karaoke system. The user account 104 can access playlists, previous performances, recordings, selected songs, or the like from one of remote user data sources 102a-102c in order to provide an in-vehicle personalized karaoke experience). At the time of the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify Kennedy's teaching with this feature, as taught by Mortensen, in order to provide an in-vehicle personalized karaoke experience (column 4, lines 25-47; Mortensen).

Regarding claim 10, Kennedy does not teach the method of claim 1, further comprising, in the interactive mode: providing the audio signal with a vocal portion of the song in response to receiving a vocal support request; and providing the audio signal with only an instrumental portion of the song absent the vocal support request.
In the same field of endeavor, Mortensen discloses providing the audio signal with a vocal portion of the song in response to receiving a vocal support request, and providing the audio signal with only an instrumental portion of the song absent the vocal support request (column 10, lines 24-31: the entertainment system plays the selected song and recorded audio. The song may be a recording with vocals removed that is streamed from a remote data source or from local data storage. The user's recorded vocals can then be played along with the selected song to provide an experience of being the singer in the song). At the time of the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify Kennedy's teaching with this feature, as taught by Mortensen, in order to provide an experience of being the singer in the song (column 10, lines 24-31; Mortensen).

Claims 12, 14-18, and 20 are system claims corresponding to method claims 1, 3-6, 8, and 10. Therefore, claims 12, 14-18, and 20 have been analyzed and rejected on the same basis as method claims 1, 3-6, 8, and 10.

Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2019/0212160 A1 to Kennedy et al. (hereinafter "Kennedy") in view of U.S. Patent No. 10,332,495 B1 to Mortensen et al. (hereinafter "Mortensen"), and further in view of U.S. Pub. No. 2009/0319172 A1 to Almeida et al. (hereinafter "Almeida").
Regarding claim 2, Kennedy teaches the method of claim 1, wherein the estimating the travel time of the vehicle further comprises: obtaining data indicative of a current travel state of the vehicle, the current travel state including at least one of the current time, a current location of the vehicle, or a drive state of the vehicle (paragraphs [0099] and [0125]: determining engine 404 operates to determine the current location of the media-playback device. The location determining engine 404 communicates with the location-determining device 150 of FIG. 2 to determine the location of the user. The current location of the user may be used during travel along the route to determine if the user is ahead of or behind schedule based on the predicted route).

However, Kennedy in view of Mortensen does not teach identifying a potential destination of the user based on the current travel state and a travel destination predictor defined based on previous destinations traveled by the user, wherein the travel time is estimated based on the potential destination. In the same field of endeavor, Almeida discloses this limitation (Abstract and paragraph [0119]: route timing computation module 1360 takes into consideration historical route information from the historical route database and the social network manager 1358. A travel time computation, e.g., as determined by the route timing computation module 1360, is typically a prediction using a number of factors).

At the time of the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify the teaching of Kennedy and Mortensen with this feature, as taught by Almeida, in order to use historical routes to reduce travel time and decrease the number of inquiries needed to update travel time predictions (paragraph [0004]; Almeida).

Claim 13 is a system claim corresponding to method claim 2. Therefore, claim 13 has been analyzed and rejected on the same basis as method claim 2.

Claims 11 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Kennedy in view of Mortensen, and further in view of U.S. Patent No. 7,928,307 B2 to Hetherington et al. (hereinafter "Hetherington").

Regarding claim 11, Kennedy in view of Mortensen does not teach the method of claim 1, further comprising, in the interactive mode: detecting whether the user is singing with the song; providing the audio signal with a vocal portion of the song in response to the user not singing with the song; and providing the audio signal without the vocal portion of the song in response to the user singing with the song.
In the same field of endeavor, Hetherington discloses, in the interactive mode: detecting whether the user is singing with the song; providing the audio signal with a vocal portion of the song in response to the user not singing with the song; and providing the audio signal without the vocal portion of the song in response to the user singing with the song (column 2, lines 3-33: when the system is activated, the singer sings along to the music and the vocal track in the music is automatically attenuated whenever the person sings. As long as the person is singing, the automatic attenuation is invoked. If the person stops singing, then the vocal track returns). At the time of the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify the teaching of Kennedy and Mortensen with this feature, as taught by Hetherington, in order to enhance the experience of singing along without looking at the lyrics (Abstract; Hetherington).

Claim 21 is a system claim corresponding to method claim 11. Therefore, claim 21 has been analyzed and rejected on the same basis as method claim 11.

Response to Arguments

Applicant's arguments with respect to claims 1-6, 8, 10-18, and 20-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AKELAW A TESHALE, whose telephone number is (571) 270-5302. The examiner can normally be reached 9 am-6 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, FAN TSANG, can be reached at (571) 272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AKELAW TESHALE/
Primary Examiner, Art Unit 2694
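The Hetherington behavior cited against claim 11 in the action above (attenuating the vocal track while the occupant sings, restoring it when they stop) reduces to a per-frame gain decision on the cabin-microphone level. This sketch is illustrative only; the threshold and gain values, and the hysteresis scheme, are assumptions rather than anything disclosed in Hetherington:

```python
def vocal_gain(mic_rms, prev_singing, on=0.06, off=0.03,
               ducked=0.1, full=1.0):
    """Return (gain, singing) for one audio frame. Hysteresis between the
    `on`/`off` RMS thresholds avoids gain flutter near the boundary."""
    singing = mic_rms > on or (prev_singing and mic_rms > off)
    return (ducked if singing else full), singing

# Feed cabin-microphone RMS levels frame by frame; the vocal stem is
# ducked while singing is detected and restored when it stops.
state = False
for rms in [0.01, 0.08, 0.05, 0.02, 0.01]:
    gain, state = vocal_gain(rms, state)
```

The hysteresis pair (`on` > `off`) is the usual design choice for level detectors: once singing is detected, the level must fall below the lower threshold before the vocal track returns, so brief dips mid-phrase do not make it pop back in.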

Prosecution Timeline

Jul 07, 2023 — Application Filed
May 12, 2025 — Non-Final Rejection, §103
Aug 15, 2025 — Response Filed
Nov 06, 2025 — Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598261 — WIDEBAND DOUBLETALK DETECTION FOR OPTIMIZATION OF ACOUSTIC ECHO CANCELLATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598253 — SYSTEMS AND METHODS FOR MEDIA ANALYSIS FOR CALL STATE DETECTION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589700 — HOLDING APPARATUS AND METHOD FOR HOLDING A MOBILE DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12574665 — DATA PROCESSING METHOD, OUTDOOR UNIT, INDOOR UNIT AND COMPUTER-READABLE STORAGE MEDIUM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12563346 — FLEXIBLE ELECTRONIC DEVICE AND METHOD FOR ADJUSTING SOUND OUTPUT THEREOF (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 98% (+15.6%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 834 resolved cases by this examiner. Grant probability derived from career allow rate.
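The note above says the headline figures are derived from the career allow rate. Under the simplest reading (a sketch of the arithmetic, not the vendor's actual model), the displayed 82% and 98% follow directly from the counts shown on this page:

```python
granted, resolved = 687, 834            # career totals shown above
allow_rate = granted / resolved         # ~0.824, displayed as 82%

interview_lift = 0.156                  # +15.6% lift with an interview
with_interview = min(allow_rate + interview_lift, 1.0)  # ~0.980, displayed as 98%

print(f"{allow_rate:.1%}, {with_interview:.1%}")
```

The `min(..., 1.0)` clamp is an assumption: a simple additive lift could otherwise push the probability past 100% for high-allowance examiners.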
