Prosecution Insights
Last updated: April 19, 2026
Application No. 18/264,537

Method for Operating a Digital Assistant of a Vehicle, Computer-Readable Medium, System, and Vehicle

Status: Final Rejection — §101, §102
Filed: Aug 07, 2023
Examiner: BHARGAVA, ANIL K
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: BAYERISCHE MOTOREN WERKE AKTIENGESELLSCHAFT
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% — above average (447 granted / 540 resolved; +27.8% vs TC avg)
Interview Lift: +29.0% on resolved cases with interview
Avg Prosecution: 3y 0m
Currently Pending: 10
Career History: 550 total applications across all art units
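The headline figures above can be reproduced from the raw counts (447 granted of 540 resolved). A minimal sketch of the arithmetic follows; note that the implied Tech Center average is our own back-calculation from the reported +27.8% gap, not a figure shown by the tool:

```python
# Reproducing the examiner's headline statistics from the raw counts above.
# The implied Tech Center average is back-calculated from the reported
# +27.8% gap and is an estimate, not a figure reported by the tool.
granted = 447    # applications granted by this examiner
resolved = 540   # total resolved applications

allow_rate = granted / resolved           # 0.8277... -> displayed as 83%
tc_gap = 0.278                            # reported delta vs. Tech Center average
implied_tc_avg = allow_rate - tc_gap      # roughly 55%

print(f"Career allow rate: {allow_rate:.1%}")     # 82.8%
print(f"Implied TC average: {implied_tc_avg:.1%}")
```

The 99% with-interview figure cannot be derived from these counts alone; it presumably reflects the allowance rate within the subset of resolved cases that included an examiner interview.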

Statute-Specific Performance

§101: 11.6% (-28.4% vs TC avg)
§103: 51.0% (+11.0% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 540 resolved cases.

Office Action

Rejections: §101, §102
DETAILED ACTION

This action is responsive to the following communication: Amendment filed 01/22/26. This action is made final. Claims 12-29 are pending in the case. Claims 26, 27 and 28 are independent claims. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice as to Grounds of Rejection and Pre-AIA or AIA Status

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 27-28 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. With regard to claims 27 and 28, these claims recite a system carrying out the method of claim 12 and a vehicle comprising the system of claim 27, respectively. These claims do not include any hardware component tied to the method of claim 12 or to the system of claim 27 carrying out the method of claim 12, respectively. As such, the system of claim 27 and the vehicle of claim 28 are reasonably interpreted as software per se. Accordingly, the recited "system" of claim 27 and "vehicle" of claim 28 are computer software per se and are not a "process," a "machine," a "manufacture," or a "composition of matter," as defined in 35 U.S.C. 101.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 12-29 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wolverton et al. (U.S. Patent Application Publication 2014/0136187 A1, hereinafter Wolverton).

With regard to claims 12, 26-28, Wolverton teaches a method, a computer-readable medium, a system, and a vehicle, respectively, for operating a digital assistant of a vehicle, the method comprising: receiving a command to change an operating mode of the digital assistant from a first digital assistant operating mode to a second digital assistant operating mode by means of the digital assistant <fig 1 item 102, user can issue various types of input commands to switch modes of a vehicle personal (digital) assistant para 0039>; changing the operating mode of the digital assistant from the first operating mode to the second operating mode in response to the received command to change the operating mode of the digital assistant <various forms of input can be used for different modes of the digital assistant para 0039, see also where the digital assistant can be in an autonomous mode by the patent para 0141>; identifying first, vehicle-specific context information for the second operating mode of the digital assistant <vehicle context can be used such as speed, location, motion para 0141, see also fig 1 items 116-122>, wherein the first, vehicle-specific context
information comprises user-specific context information <vehicle-specific information that it is being driven slowly can be user-specific context that the user is stuck in traffic para 0074>; providing a first message comprising a first operator prompt to a user depending on the first, vehicle-specific context information identified by the digital assistant <a prompt can be provided fig 8 item 822 para 0135-0137>; receiving a first user input in response to the first operator prompt by means of the digital assistant <user can ignore/listen/view/save as shown – see fig 8 item 824, see also para 0140>; executing an operation corresponding to the first user input at least in part using the digital assistant depending on the first, vehicle-specific context information in the second operating mode <digital assistant provides answer by executing the respective query and provides results in a conversational format para 0137-0140>; and providing a second message to the user depending on the first user input and the first, vehicle-specific context information <conversation between the driver and the digital assistant is shown in fig 8 para 0135-0138>.

With regard to claim 13, this claim depends upon claim 12, which is rejected above. In addition, Wolverton teaches wherein the digital assistant is configured to cause a physical system response responsive to a user input <parking brake status, yellow light (physical system response) is provided based upon user input fig 8 items 836-842>. Examiner notes that the specification of the instant application does not shed any light on "physical system response".

With regard to claim 14, this claim depends upon claim 12, which is rejected above.
In addition, Wolverton teaches wherein the command to change the operating mode from the first operating mode of the digital assistant to the second operating mode of the digital assistant is received by the digital assistant without user input <driver of the vehicle communicates with the digital assistant in different modes para 0034-0036, fig 8>, from a control unit based on a sensor <vehicle sensor can be used fig 2 item 208> of the vehicle sensing a condition of the vehicle <sensor determines that a user is driving very slowly para 0074>.

With regard to claim 15, this claim depends upon claim 12, which is rejected above. In addition, Wolverton teaches wherein the first, vehicle-specific context information for the second operating mode of the digital assistant differs from a first, vehicle-specific context information of the first operating mode of the digital assistant <fig 8 shows contextual information among different modes is different, for example touch/voice/autonomous/facial para 0140-0141>.

With regard to claim 16, this claim depends upon claim 12, which is rejected above. In addition, Wolverton teaches wherein the first operator prompt includes one or more commands that the digital assistant can execute <user can ignore/listen/view/save as shown – see fig 8 item 824, see also para 0140; digital assistant provides answer by executing the respective query and provides results in a conversational format para 0137-0140>; or wherein the first user input includes a command of the first operator prompt, which corresponds to an operation that can be executed by the digital assistant.

With regard to claim 17, this claim depends upon claim 16, which is rejected above. In addition, Wolverton teaches wherein the digital assistant is configured to cause a physical system response responsive to a user input <parking brake status, yellow light (physical system response) is provided based upon user input fig 8 items 836-842>.
Examiner notes that the specification of the instant application does not shed any light on "physical system response".

With regard to claim 18, this claim depends upon claim 12, which is rejected above. In addition, Wolverton teaches wherein the execution of the operation corresponding to the first user input by the digital assistant depending on the first, vehicle-specific context information in the second operating mode comprises: determining a first output channel depending on the first, vehicle-specific context information <speakers, display screen and other output devices can be determined figs 7, 8 para 0135>; executing the operation corresponding to the first user input by means of the digital assistant <user can ignore/listen/view/save as shown – see fig 8 item 824, see also para 0140; digital assistant provides answer by executing the respective query and provides results in a conversational format para 0137-0140>; and providing first output information to the user of the vehicle via the first output channel depending on the first user input while the operation corresponding to the first user input is carried out by the digital assistant <fig 8 items 836-842, conversational dialog-based digital owner's manual para 0140>.

With regard to claim 19, this claim depends upon claim 18, which is rejected above.
In addition, Wolverton teaches wherein the execution of the operation corresponding to the first user input by the digital assistant depending on the first, vehicle-specific context information in the second operating mode also comprises: determining a second output channel based on the first, vehicle-specific context information <presentation style is determined to present response/notification para 0042, 0122>; and providing second output information to the user of the vehicle via the first output channel depending on the first user input while the operation corresponding to the first user input is carried out by the digital assistant <response/notification, spoken natural language is output para 0042, 0122>.

With regard to claim 20, this claim depends upon claim 18, which is rejected above. In addition, Wolverton teaches the method further comprising: receiving a further command to change the operating mode of the digital assistant from the second digital assistant operating mode to the first digital assistant operating mode by means of the digital assistant <fig 1 item 102, user can issue various types of input commands to switch modes of a vehicle personal (digital) assistant para 0039>; changing the operating mode of the digital assistant from the second operating mode to the first operating mode in response to the received further command; identifying first, vehicle-specific context information in the first operating mode, wherein the first, vehicle-specific context information in the first operating mode of the digital assistant is different from the first, vehicle-specific context information of the second operating mode of the digital assistant <various forms of input can be used for different modes of the digital assistant para 0039, see also where the digital assistant can be in an autonomous mode by the patent para 0141>; receiving a further user input in the first operating mode from the user by the digital assistant in the first operating mode of the
digital assistant, wherein the further user input in the first operating mode corresponds to the first user input in the second operating mode <touch and voice input can be provided in the two different modes para 0135-0141, fig 8>; and executing an operation corresponding to the further user input by means of the digital assistant depending on the first, vehicle-specific context information in the first operating mode <digital assistant provides answer by executing the respective query and provides results in a conversational format para 0137-0140>.

With regard to claim 21, this claim depends upon claim 20, which is rejected above. In addition, Wolverton teaches wherein the execution of the operation corresponding to the further user input by the digital assistant depending on the first, vehicle-specific context information in the first operating mode comprises: determining a further first output channel depending on the first, vehicle-specific context information in the first operating mode of the digital assistant <voice output is provided, fig 8>, wherein the further first output channel in the first operating mode is different from the first output channel of the digital assistant in the second operating mode of the digital assistant <output is provided by display, fig 8>, and wherein the further first output channel in the first operating mode is an only output channel of the digital assistant in the first operating mode <conversational mode is selected fig 8 para 0135-0138>; and providing further output information via the further first output channel <conversational mode is selected fig 8 para 0135-0138>.

With regard to claim 22, this claim depends upon claim 21, which is rejected above. In addition, Wolverton teaches wherein the digital assistant is configured to cause a physical system response responsive to a user input <parking brake status, yellow light (physical system response) is provided based upon user input fig 8 items 836-842>.
Examiner notes that the specification of the instant application does not shed any light on "physical system response".

With regard to claim 23, this claim depends upon claim 21, which is rejected above. In addition, Wolverton teaches wherein the execution of the operation corresponding to the further user input by the digital assistant depending on the first, vehicle-specific context information in the first operating mode comprises: providing the further output information in the first operating mode to the user of the vehicle via the first output channel depending on the first user input while the operation corresponding to the further user input is carried out by the digital assistant in the first operating mode <voice output is provided, fig 8>, wherein the further output information in the first operating mode is different from the first output information in the second operating mode of the digital assistant <voice and display output is provided fig 8>.

With regard to claim 24, this claim depends upon claim 20, which is rejected above. In addition, Wolverton teaches wherein the execution of the operation corresponding to the further user input by the digital assistant depending on the first, vehicle-specific context information in the first operating mode comprises: providing further first output information in the first operating mode to the user of the vehicle via a further first output channel depending on the first user input while the operation corresponding to the further user input is carried out by the digital assistant in the first operating mode <voice output is provided, fig 8>, wherein the further first output information in the first operating mode is different from the first output information in the second operating mode of the digital assistant <voice and display output is provided fig 8>.

With regard to claim 25, this claim depends upon claim 12, which is rejected above.
In addition, Wolverton teaches the method further comprising: receiving a further command to change the operating mode of the digital assistant from the second digital assistant operating mode to the first digital assistant operating mode by means of the digital assistant <fig 1 item 102, user can issue various types of input commands to switch modes of a vehicle personal (digital) assistant para 0039>; changing the operating mode of the digital assistant from the second operating mode to the first operating mode in response to the received further command <various forms of input can be used for different modes of the digital assistant para 0039, see also where the digital assistant can be in an autonomous mode by the patent para 0141>; identifying first, vehicle-specific context information in the first operating mode, wherein the first, vehicle-specific context information in the first operating mode of the digital assistant is different from the first, vehicle-specific context information of the second operating mode of the digital assistant <driver of the vehicle communicates with the digital assistant in different modes para 0034-0036, fig 8; vehicle context can be used such as speed, location, motion para 0141, see also fig 1 items 116-122>; receiving a further user input in the first operating mode from the user by the digital assistant in the first operating mode of the digital assistant, wherein the further user input in the first operating mode corresponds to the first user input in the second operating mode <touch and voice input can be provided in the two different modes para 0135-0141, fig 8>; and executing an operation corresponding to the further user input by means of the digital assistant depending on the first, vehicle-specific context information in the first operating mode <digital assistant provides answer by executing the respective query and provides results in a conversational format para 0137-0140>.
With regard to claim 29, this claim depends upon claim 12, which is rejected above. In addition, Wolverton teaches wherein the first, vehicle-specific context information comprises previous experience by the user with a function of a vehicle <previous experience of the user with the function of the vehicle such as "parking brakes" can be used para 0078>.

Response to Arguments

Applicant's remarks filed on 01/22/26 have been considered but are not persuasive. The 101 rejection is maintained; Applicant has overcome the rejection of claim 26. Claim 27 depends on claim 12 and claim 28 depends upon claim 27. Applicant, on page 11, section V, has incorrectly stated that claims 27 and 28 depend upon claim 26.

Regarding the previous rejection of claim 12 under 35 USC 102(a)(1), Applicant argues on pages 11-13 (section VI (A and B)) that Wolverton does not teach "the first, vehicle-specific context information comprises user-specific context information". Examiner has provided the citation for this added limitation. In addition, Applicant argues that Wolverton does not teach or suggest "providing a first message comprising a first operator prompt to a user depending on the first, vehicle-specific context information identified by the digital assistant". The Office respectfully disagrees; as stated in the rejection above, fig 8, item 822 shows a prompt that can provide context-aware information, as determined by the context analyzer, that one of the lights on the vehicle indicator panel is turned on, see para 0137. Furthermore, it should be noted that Applicant's arguments attempt to narrow the broadest reasonable interpretation of the claimed language; the Examiner cannot make a narrow interpretation per Applicant's remarks. Applicant is invited to amend the claims to reflect the distinctions pointed out in the remarks and to ensure that the claims are interpreted in such intended fashion.
Applicant further argues on page 14, section VII (A) that claim 14 is allowable as amended; Examiner has provided citation for the claim as amended. Therefore, the reference Wolverton has been reasonably interpreted as teaching the recited claim language. Applicant further argues on page 14, section VII that claims 13-28 are allowable for the reasons argued above for claim 12. The Office respectfully disagrees, and counter-asserts the rationale set forth above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANIL K BHARGAVA, whose telephone number is (571) 270-3278. The examiner can normally be reached Monday - Friday, 8:30 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at 571-272-4140.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANIL K BHARGAVA/
Primary Examiner, Art Unit 2172

Prosecution Timeline

Aug 07, 2023: Application Filed
Oct 18, 2025: Non-Final Rejection — §101, §102
Jan 22, 2026: Response Filed
Mar 06, 2026: Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602142: USER INTERFACES FOR SHARING CONTEXTUALLY RELEVANT MEDIA CONTENT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602307: ROOT CAUSE DETECTION OF STRUGGLE EVENTS WITH DIGITAL EXPERIENCES AND RESPONSES THERETO (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596776: USER INTERFACES FOR MANAGING SECURE OPERATIONS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597187: SYSTEMS AND METHODS FOR GENERATING CONTENT CONTAINING AUTOMATICALLY SYNCHRONIZED VIDEO, AUDIO, AND TEXT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12590809: INFORMATION PROVIDING DEVICE, INFORMATION PROVIDING METHOD, AND INFORMATION PROVIDING PROGRAM (granted Mar 31, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+29.0%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate

Based on 540 resolved cases by this examiner. Grant probability derived from career allow rate.
