Prosecution Insights
Last updated: April 19, 2026
Application No. 18/376,592

Audio User Interface

Status: Non-Final OA (§103)
Filed: Oct 04, 2023
Examiner: TWEEL JR, JOHN ALEXANDER
Art Unit: 2689
Tech Center: 2600 (Communications)
Assignee: BOSE CORPORATION
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 83% (above average; 1191 granted / 1441 resolved; +20.7% vs TC avg)
Interview Lift: +10.0% (moderate lift, measured across resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor)
Total Applications: 1460 across all art units (19 currently pending)

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 41.8% (+1.8% vs TC avg)
§102: 20.1% (-19.9% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 1441 resolved cases.
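The per-statute deltas are plain differences against a Tech Center baseline. A minimal Python sketch (figures copied from the table above) recovers the baseline each delta implies:

```python
# Statute -> (examiner's rejection rate %, delta vs Tech Center average)
rates = {
    "§101": (2.7, -37.3),
    "§103": (41.8, +1.8),
    "§102": (20.1, -19.9),
    "§112": (13.6, -26.4),
}

# Implied Tech Center average = examiner's rate minus the displayed delta
tc_avg = {statute: round(rate - delta, 1) for statute, (rate, delta) in rates.items()}
print(tc_avg)
```

Notably, all four implied baselines come out to 40.0%, which suggests the deltas were computed against a single aggregate Tech Center figure rather than per-statute averages.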

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the controller must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities: Page 6, Line 17: it appears there is some verb or grammar missing in the phrase “initiation of the feature manually by the user”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Vautin et al [U.S. 11,985,495] in view of Furumoto et al [US 2020/0272325].

For claim 1, the apparatus for controlling an audio system in a vehicle (Title: Audio Control in Vehicle Cabin) taught by Vautin includes the following claimed subject matter, as noted: 1) the claimed display is met by the interface (No. 140; Col. 11, Lns. 46-47); 2) the claimed input sensor is met by the touch screen (No. 410; Col. 7, Lns. 25-26) coupled to the display; and 3) the claimed controller is met by the control system (No. 130) coupled to the display (Fig. 1), the input sensor, and the audio system, configured to present a display including one or more audio setting options for at least one of a first individual or group of occupant locations in the vehicle (Col. 7, Lns. 29-32: Interface command controls 420A and 420B include audio output controls for distinct locations…in the cabin) to be distinct relative to a second individual or group of occupant locations in the vehicle (i.e., control 420B for the passenger seat as opposed to control 420A for the driver’s seat seen in Figure 4), the controller also configured to receive user selections via the input sensor and to control the audio system in accord with the user selections (Col. 8, Lns. 65-67: different volume levels (e.g., volume level (a) to location 204 and volume level (b) to location 206)).

However, although Figure 4 of Vautin displays two separate areas on the screen for the two separate volume controls, these cannot clearly be called “sub-panels” on the display. Separating two areas of a display into separate input areas is not new in the prior art. The input control device and method taught by Furumoto acquires pieces of information in order to indicate multiple split areas into which the screen of a display equipped with a touch sensor is split (Abstract), and attribution information for each of the multiple split areas. As seen in Figures 10 and 11, the touchscreen may be split into separate areas depending on the passengers in the vehicle, such as the driver and other passengers. Based on whichever area is manipulated, an action can be performed for that area of the vehicle, such as changing the air conditioner temperature or changing the volume of the audio (Figs. 15 and 16). The obvious advantage of the Furumoto reference is that a single operation device can be used to perform multiple actions (Paragraph 8), and the Vautin reference requires at least two separate areas in which to enact its volume controls. Furumoto teaches one obvious method by which these volume controls may be separated into distinct areas or “sub-panels” usable by the Vautin reference.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to present the information of Vautin in separate “sub-panels” like those of Furumoto for the purpose of using a single display device for multiple users.

For claim 2, Figure 4 of Vautin depicts the second control (No. 420B), which does not include control for the driver’s seat.

For claim 3, the command controls (Nos. 420A and 420B) of Vautin include volume controls for audio output in locations (Nos. 204 and 206).
For claim 4, the Furumoto reference also contains an occupant detection sensor (No. 44) that outputs the result of the occupant detection to the area splitting unit (Paragraph 77).

For claims 5 and 6, Figure 4 of Vautin presents audio setting information and options (Nos. 420A and 420B) for two separate occupant locations, in this case volume levels and controls for the driver’s seat and passenger seat. Also, Figure 16 of Furumoto displays separate volume actions depending on the location of the zone to be adjusted.

For claim 8, the method of controlling an audio system in a vehicle (Title: Audio Control in Vehicle Cabin) taught by Vautin includes the following claimed steps, as noted: 1) the claimed displaying a display is achieved using the interface (No. 140; Col. 11, Lns. 46-47) including one or more audio setting options for at least one of a first individual or group of occupant locations in the vehicle (Col. 7, Lns. 29-32: Interface command controls 420A and 420B include audio output controls for distinct locations…in the cabin) to be distinct relative to a second individual or group of occupant locations in the vehicle (i.e., control 420B for the passenger seat as opposed to control 420A for the driver’s seat seen in Figure 4); 2) the claimed receiving user selections via an input sensor is achieved using the touch screen (No. 410; Col. 7, Lns. 25-26) coupled to the display; and 3) the claimed controlling the audio system in accord with the user selections is achieved using the control system (No. 130) coupled to the display (Fig. 1). However, although Figure 4 of Vautin displays two separate areas on the screen for the two separate volume controls, these cannot clearly be called “sub-panels” on the display. The claim is interpreted and rejected for the same reasons and rationale as mentioned in the rejection of claim 1 above.

For claim 9, Figure 4 of Vautin depicts the second control (No. 420B), which does not include control for the driver’s seat.

For claim 10, the command controls (Nos. 420A and 420B) of Vautin include volume controls for audio output in locations (Nos. 204 and 206).

For claim 11, the Furumoto reference also contains an occupant detection sensor (No. 44) that outputs the result of the occupant detection to the area splitting unit (Paragraph 77).

For claims 12 and 13, Figure 4 of Vautin presents audio setting information and options (Nos. 420A and 420B) for two separate occupant locations, in this case volume levels and controls for the driver’s seat and passenger seat. Also, Figure 16 of Furumoto displays separate volume actions depending on the location of the zone to be adjusted.

For claim 15, the non-transitory computer readable medium taught by Vautin (Col. 13, Lns. 52-53: computer program product), having instructions that, when executed by a processor (Col. 13, Lns. 56-57: a programmable processor), perform a method of controlling an audio system in a vehicle (Title: Audio Control in Vehicle Cabin), includes the following claimed steps, as noted: 1) the claimed displaying a display is achieved using the interface (No. 140; Col. 11, Lns. 46-47) including one or more audio setting options for at least one of a first individual or group of occupant locations in the vehicle (Col. 7, Lns. 29-32: Interface command controls 420A and 420B include audio output controls for distinct locations…in the cabin) to be distinct relative to a second individual or group of occupant locations in the vehicle (i.e., control 420B for the passenger seat as opposed to control 420A for the driver’s seat seen in Figure 4); 2) the claimed receiving user selections via an input sensor is achieved using the touch screen (No. 410; Col. 7, Lns. 25-26) coupled to the display; and 3) the claimed controlling the audio system in accord with the user selections is achieved using the control system (No. 130) coupled to the display (Fig. 1).
However, although Figure 4 of Vautin displays two separate areas on the screen for the two separate volume controls, these cannot clearly be called “sub-panels” on the display. The claim is interpreted and rejected for the same reasons and rationale as mentioned in the rejection of claim 1 above.

For claim 16, Figure 4 of Vautin depicts the second control (No. 420B), which does not include control for the driver’s seat.

For claim 17, the command controls (Nos. 420A and 420B) of Vautin include volume controls for audio output in locations (Nos. 204 and 206).

For claim 18, the Furumoto reference also contains an occupant detection sensor (No. 44) that outputs the result of the occupant detection to the area splitting unit (Paragraph 77).

For claim 19, Figure 4 of Vautin presents audio setting information and options (Nos. 420A and 420B) for two separate occupant locations, in this case volume levels and controls for the driver’s seat and passenger seat. Also, Figure 16 of Furumoto displays separate volume actions depending on the location of the zone to be adjusted.

Claims 1-6, 8-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over MacNeille et al [U.S. 9,773,495] in view of Furumoto et al.

For claim 1, the apparatus for controlling an audio system in a vehicle (Title: System for Personalized Sound Isolation in Vehicle) taught by MacNeille includes the following claimed subject matter, as noted: 1) the claimed display is met by the vehicle display (No. 30); 2) the claimed input sensor is met by the touch screen (Col. 14, Ln. 12) to accept user selections, coupled to the user interface (No. 32) presented on said display; and 3) the claimed controller is met by the control units (No. 218) of the human-machine interface (No. 210) that is coupled to the display and the input sensor (Fig. 5) to present a display including one or more audio setting options for at least one of a first individual or group of occupant locations in the vehicle (Col. 7, Lns. 46-48: a user interface 32a can be configured to enable user control of the audio settings for Zone A; see Figs. 1 and 3, Zone A) to be distinct relative to a second individual or group of occupant locations in the vehicle (Col. 7, Lns. 48-50: a user interface 32b can be configured to enable user control of the audio settings for Zone B), the controller also configured to receive user selections via the input sensor and to control the audio system in accord with the user selections (Col. 7, Lns. 57-58: selection of a volume level for audio being played within the selected audio zone 12).

However, Figure 3 of MacNeille displays controls for only one zone of the vehicle; there is no mention of at least two sub-panels on a single display. The MacNeille reference does, however, mention that all of the audio user interfaces (No. 32) may be presented either individually or as a group (Col. 14, Lns. 9-11), with all the touch inputs included therein. And, similarly to the rejection above, the Furumoto reference presents one example method to display separate audio inputs on one screen. As seen in Figures 10 and 11, the touchscreen may be split into separate areas depending on the passengers in the vehicle, such as the driver and other passengers. Based on whichever area is manipulated, an action can be performed for that area of the vehicle, such as changing the air conditioner temperature or changing the volume of the audio (Figs. 15 and 16). The obvious advantage of the Furumoto reference is that a single operation device can be used to perform multiple actions (Paragraph 8), and the MacNeille reference explicitly states that the audio controls may be displayed as a group. Furumoto teaches one obvious method by which these volume controls may be separated into distinct areas or “sub-panels” usable by the MacNeille reference.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to present the information of MacNeille in separate “sub-panels” like those of Furumoto for the purpose of using a single display device for multiple users.

For claim 2, each of the four audio zones (Nos. 12a-12d) seen in Figure 1 of MacNeille comprises an occupant location that is not included in any other occupant location.

For claim 3, the user interfaces (No. 32) of MacNeille include controls for the audio source of each audio zone as well as volume levels, among other audio settings (Col. 7, Lns. 55-60).

For claim 4, the Furumoto reference also contains an occupant detection sensor (No. 44) that outputs the result of the occupant detection to the area splitting unit (Paragraph 77).

For claims 5 and 6, Figure 3 of MacNeille depicts audio setting options and information in the form of slider controls (Nos. 36 and 38) that indicate selected values (Nos. 37 and 39) for the audio levels. Also, Figure 16 of Furumoto displays separate volume actions depending on the location of the zone to be adjusted.

For claim 8, the method of controlling an audio system in a vehicle (Title: Method for Personalized Sound Isolation in Vehicle) taught by MacNeille includes the following claimed steps: 1) the claimed displaying a display is achieved using the vehicle display (No. 30) including one or more audio setting options for at least one of a first individual or group of occupant locations in the vehicle (Col. 7, Lns. 46-48: a user interface 32a can be configured to enable user control of the audio settings for Zone A; see Figs. 1 and 3, Zone A) to be distinct relative to a second individual or group of occupant locations in the vehicle (Col. 7, Lns. 48-50: a user interface 32b can be configured to enable user control of the audio settings for Zone B); 2) the claimed receiving user selections via an input sensor is achieved using the touch screen (Col. 14, Ln. 12) to accept user selections, coupled to the user interface (No. 32) presented on said display; and 3) the claimed controlling the audio system in accord with the user selections is achieved using the control units (No. 218) of the human-machine interface (No. 210) that is coupled to the display and the input sensor (Fig. 5). However, there is no mention of at least two sub-panels on a single display. The claim is interpreted and rejected for the same reasons and rationale as mentioned in the rejection of claim 1 above.

For claim 9, each of the four audio zones (Nos. 12a-12d) seen in Figure 1 of MacNeille comprises an occupant location that is not included in any other occupant location.

For claim 10, the user interfaces (No. 32) of MacNeille include controls for the audio source of each audio zone as well as volume levels, among other audio settings (Col. 7, Lns. 55-60).

For claim 11, the Furumoto reference also contains an occupant detection sensor (No. 44) that outputs the result of the occupant detection to the area splitting unit (Paragraph 77).

For claims 12 and 13, Figure 3 of MacNeille depicts audio setting options and information in the form of slider controls (Nos. 36 and 38) that indicate selected values (Nos. 37 and 39) for the audio levels. Also, Figure 16 of Furumoto displays separate volume actions depending on the location of the zone to be adjusted.

For claim 15, the non-transitory computer readable medium having instructions that, when executed by a processor (Col. 12, Lns. 19-22: program modules or software instructions stored in a data storage…and executed by a data processor), cause the processor to perform the following method, as noted: 1) the claimed displaying a display is achieved using the vehicle display (No. 30) including one or more audio setting options for at least one of a first individual or group of occupant locations in the vehicle (Col. 7, Lns. 46-48: a user interface 32a can be configured to enable user control of the audio settings for Zone A; see Figs. 1 and 3, Zone A) to be distinct relative to a second individual or group of occupant locations in the vehicle (Col. 7, Lns. 48-50: a user interface 32b can be configured to enable user control of the audio settings for Zone B); 2) the claimed receiving user selections via an input sensor is achieved using the touch screen (Col. 14, Ln. 12) to accept user selections, coupled to the user interface (No. 32) presented on said display; and 3) the claimed controlling the audio system in accord with the user selections is achieved using the control units (No. 218) of the human-machine interface (No. 210) that is coupled to the display and the input sensor (Fig. 5). However, there is no mention of at least two sub-panels on a single display. The claim is interpreted and rejected for the same reasons and rationale as mentioned in the rejection of claim 1 above.

For claim 16, each of the four audio zones (Nos. 12a-12d) seen in Figure 1 of MacNeille comprises an occupant location that is not included in any other occupant location.

For claim 17, the user interfaces (No. 32) of MacNeille include controls for the audio source of each audio zone as well as volume levels, among other audio settings (Col. 7, Lns. 55-60).

For claim 18, the Furumoto reference also contains an occupant detection sensor (No. 44) that outputs the result of the occupant detection to the area splitting unit (Paragraph 77).

For claim 19, Figure 3 of MacNeille depicts audio setting options and information in the form of slider controls (Nos. 36 and 38) that indicate selected values (Nos. 37 and 39) for the audio levels. Also, Figure 16 of Furumoto displays separate volume actions depending on the location of the zone to be adjusted.
Claims 7, 14, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: The prior art does not fully teach or suggest expanding or contracting at least one of the two sub-panels to occupy more or less of the display in response to a user interacting with the at least one of the two sub-panels via the input sensor, in conjunction with the subject matter found in each independent claim. This is considered unobvious subject matter.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Forrester et al [US 2024/0408965] presents a visual display having sub-panels. Winton et al [US 2025/0220352] determines output configuration based on selection of the passengers.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN A. TWEEL JR, whose telephone number is (571) 272-2969. The examiner can normally be reached M-F 8-4. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Davetta W. Goins, can be reached at 571-272-2957. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JAT 2/6/2026

/JOHN A TWEEL JR/
Primary Examiner, Art Unit 2689

Prosecution Timeline

Oct 04, 2023
Application Filed
Feb 08, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602970: PERSONAL SIGNALING DEVICE
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12593824: LIVESTOCK MANAGEMENT SYSTEM AND METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597323: METHOD, SYSTEM AND APPARATUS FOR CONTROLLING SECURITY SIRENS OF A SECURITY SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12595074: ON-BOARD LUGGAGE SPACE AVAILABILITY INDICATORS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12592135: SYSTEM, DEVICE, AND METHOD FOR SMOKE DISCRIMINATION AND IDENTIFICATION OF FIRE SOURCE
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 93% (+10.0%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 1441 resolved cases by this examiner. Grant probability derived from career allow rate.
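The projection figures reduce to simple arithmetic on the examiner's career record. A minimal Python sketch, assuming the +10.0% interview lift is applied additively to the career allow rate (how the dashboard actually combines the two is not stated):

```python
# Headline figures from the Examiner Intelligence panel
granted, resolved = 1191, 1441

# Career allow rate: granted / resolved
allow_rate = granted / resolved * 100          # ~82.7%, displayed as 83%

# Interview-adjusted probability, assuming the +10.0% lift is additive
interview_lift = 10.0
with_interview = allow_rate + interview_lift   # ~92.7%, displayed as 93%

print(round(allow_rate), round(with_interview))
```

Both rounded values match the dashboard's displayed 83% and 93%, consistent with the note that grant probability is derived directly from the career allow rate.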
