Prosecution Insights
Last updated: April 19, 2026
Application No. 18/595,024

SYNCHRONOUS BINAURAL USER CONTROLS FOR HEARING INSTRUMENTS

Final Rejection: §102, §103

Filed: Mar 04, 2024
Examiner: BRINEY III, WALTER F
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: Starkey Laboratories, Inc.
OA Round: 2 (Final)

Grant Probability: 65% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 12m
Grant Probability with Interview: 69%

Examiner Intelligence

Career Allow Rate: 65% (above average; +3.2% vs TC avg), 352 granted of 540 resolved
Interview Lift: +3.8% (minimal) among resolved cases with interview
Avg Prosecution: 2y 12m; 58 applications currently pending
Career History: 598 total applications across all art units

Statute-Specific Performance

§101: 1.7% (-38.3% vs TC avg)
§103: 63.2% (+23.2% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)

Tech Center averages shown are estimates. Based on career data from 540 resolved cases.

Office Action

§102 §103
Detailed Action

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. See 35 U.S.C. § 100 (note).

Art Rejections

Anticipation

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3, 4, 6–17, 19 and 20 are rejected under 35 U.S.C. § 102(a)(1), (2) as being anticipated by US Patent Application Publication 2022/0159389 (published 19 May 2022) (“Lawrence”).

Claim 1 is drawn to “a system.” The following table illustrates the correspondence between the claimed system and the Lawrence reference.

Claim 1 The Lawrence Reference

“1. A system comprising: “a first hearing instrument and a second hearing instrument, The Lawrence reference describes a corresponding system 100 comprising two hearing devices 110, 120, one of the two being worn in each of a user’s ears. Lawrence at Abs., ¶¶ 46–68, FIGs.1, 2. “wherein the first and second hearing instruments are communicatively coupled; and Lawrence’s hearing devices 110, 120 are communicatively linked by communication link 116. Id. at ¶ 51, FIG.1.
“a processing system included in one or more of the first or second hearing instruments, the processing system including one or more processors implemented in circuitry, Hearing device 110 includes a processor 112 and hearing device 120 includes a processor 122. Id. at ¶ 47, FIG.1. One of ordinary skill would have understood that processors 112 and 122 are implemented in circuitry since they execute the instructions stored in memories 113, 123. See id. at ¶ 53. “wherein the processing system is configured to: “obtain, from at least one of a first physical button or a first touch responsive surface “obtain, from at least one of a second physical button or a second touch responsive surface Hearing device 110 includes a displacement sensor 119 and hearing device 120 includes a displacement sensor 129. Id. at ¶¶ 49–52, 80–89, FIGs.1, 7A, 7B, 8A, 8B. Lawrence describes sensors 119 and 129 as displacement sensors that provide inertial data or light data, like an accelerometer, gyroscope, a light sensor or camera. Id. at ¶ 49. The use of an inertial sensor effectively turns the outer surface of hearing device 110 into a touch-responsive surface by converting a user’s touch of the outer surface into a displacement measurement that is proportional to the direction and magnitude with which a user touches the outer surface. Id. at ¶¶ 49, 77, 81. Additionally, Lawrence describes providing environmental sensors, such as capacitive and resistive sensors that detect a user touching hearing device 110 to confirm gestures made on the hearing device. Id. at ¶ 59. Each hearing device’s processor obtains displacement data, or user input, through their respective displacement sensor. Id. at ¶¶ 49–52, 80–89, FIGs.1, 7A, 7B, 8A, 8B. For example, a user may translate or rotate a hearing device. Id. The displacement sensors will detect the translation or rotation and provide a measure of the user input to a corresponding processor. Id. 
“identify a command based on the first input data and the second input data; and Processors 112, 122 then communicate the displacement data to each other and process the data to detect a gesture. Id. at ¶¶ 55, 90–113, FIGs.9–13. “execute the command.” After detecting a gesture, hearing devices 110, 120 control the functioning of hearing devices 110, 120 based on the identified gesture. Id. at ¶¶ 22, 41, FIG.13 (block 608). Table 1 For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 3 depends on claim 1 and further requires the following: “wherein the command is a first command and “the processing system is further configured to: “obtain third input data from the at least one of the first physical button or the first touch responsive surface within the first hearing instrument; “determine that the user ceased interacting with the first hearing instrument before obtaining fourth input data from the at least one of the second physical button or the second touch responsive surface within the second hearing instrument; “identify, based on the determination that the user ceased interacting with the first hearing instrument before obtaining the fourth input data, a second command; and “execute the second command.” Similarly, Lawrence describes discriminating patterns that are based on displacement data from a single one of hearing devices 110, 120 and patterns that are based on displacement data from both of hearing devices 110 and 120. Lawrence at ¶¶ 90–107, FIGs.9–11. For example, a user may tap hearing device 110, producing a displacement amplitude along a single axis. Id. at ¶¶ 108–112, FIG.12. The displacement data is compared to displacement data from hearing device 120 to determine a variation measurement. Id. at ¶ 113, FIG.13. The processors of the hearing devices then determine if a user interacted with hearing device 110, hearing device 120 or both. Id. 
In this way, the processors determine a second command and execute the second command. See id. at ¶ 113, FIG.13. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 4 depends on claim 1 and further requires the following: “wherein the processing system is further configured to determine whether a predetermined period of time has elapsed before identifying the command.” Likewise, Lawrence uses time in several manners to evaluate displacement data and to detect tapping, holding and swiping-based gestures. Lawrence at ¶¶ 91, 93, 99, 103, 111, 112, FIG.12. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 6 depends on claim 1 and further requires the following: “wherein the command is a first command and “the processing system is further configured to: “determine that only one of third input data or fourth input data has been obtained from the first and second hearing instruments; “identify a non-synchronous command based on which of the third input data or the fourth input data has been obtained, “wherein the non-synchronous command is a second command where input data from only one of the first hearing instrument and the second hearing instrument has been obtained; and “execute the non-synchronous command.” Similarly, Lawrence describes discriminating patterns that are based on displacement data from a single one of hearing devices 110, 120 and patterns that are based on displacement data from both of hearing devices 110 and 120. Lawrence at ¶¶ 90–107, FIGs.9–11. For example, a user may tap hearing device 110, producing a displacement amplitude along a single axis. Id. at ¶¶ 108–112, FIG.12. The displacement data is compared to displacement data from hearing device 120 to determine a variation measurement. Id. at ¶ 113, FIG.13. The processors of the hearing devices then determine if a user interacted with hearing device 110, hearing device 120 or both. Id. 
In this way, the processors will determine a second command from displacement data at hearing device 110 and execute the second command without receiving a synchronous user input from hearing device 120. See id. at ¶ 113, FIG.13. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 7 depends on claim 1 and further requires the following: “wherein the first input data and the second input data are respectively consistent with the user pressing twice on the first hearing instrument and pressing twice on the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that the gestures may be made up of any combination of gestures and from synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized double-tapping on hearing devices 110 and 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern). Alternatively, it would have been obvious to combine double tapping at both of Lawrence’s hearing devices as suggested by Lawrence’s broad disclosure and suggesting to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. 
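The detection scheme the Office Action attributes to Lawrence in the analysis above (per-device tap detection plus a cross-device comparison to decide whether one instrument or both were actuated) can be sketched as follows. This is an illustrative sketch only: the function names, window values, and event model are assumptions for exposition, not taken from Lawrence or the claims.

```python
# Hypothetical sketch: collapse per-device tap timestamps into gestures,
# then pair gestures across the two hearing devices. Gestures occurring on
# both devices within a sync window become one synchronized command;
# unpaired gestures become per-device (non-synchronous) commands.

SYNC_WINDOW = 0.25     # seconds between the two devices (illustrative value)
DOUBLE_TAP_GAP = 0.40  # max gap between taps of a double tap (illustrative)

def classify_taps(timestamps, double_tap_gap=DOUBLE_TAP_GAP):
    """Collapse one device's tap timestamps into 'tap'/'double-tap' events."""
    events = []
    i = 0
    while i < len(timestamps):
        if i + 1 < len(timestamps) and timestamps[i + 1] - timestamps[i] <= double_tap_gap:
            events.append(("double-tap", timestamps[i]))
            i += 2
        else:
            events.append(("tap", timestamps[i]))
            i += 1
    return events

def identify_command(left_taps, right_taps, sync_window=SYNC_WINDOW):
    """Pair gestures from the two devices; matching gestures within the sync
    window map to a synchronized command, the rest to per-device commands."""
    left = classify_taps(left_taps)
    right = classify_taps(right_taps)
    commands, used = [], set()
    for kind, t in left:
        match = next((j for j, (rk, rt) in enumerate(right)
                      if j not in used and rk == kind and abs(rt - t) <= sync_window), None)
        if match is not None:
            used.add(match)
            commands.append(f"synchronized-{kind}")
        else:
            commands.append(f"left-{kind}")
    commands += [f"right-{rk}" for j, (rk, _) in enumerate(right) if j not in used]
    return commands
```

Under these assumed windows, two taps on each device at roughly the same time collapse to a single synchronized double tap, while an isolated tap on one device yields a per-device, non-synchronous command, mirroring the single-sided versus both-sided discrimination described for claims 3 and 6.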
Claim 8 depends on claim 1 and further requires the following: “wherein the first input data and the second input data are respectively consistent with the user pressing and holding down on the first hearing instrument and the user tapping the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that the gestures may be made up of any combination of gestures and from synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized tapping on device 110 and pressing on device 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern). Alternatively, it would have been obvious to combine tapping and long pressing as suggested by Lawrence’s broad disclosure and suggesting to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 9 depends on claim 1 and further requires the following: “wherein at least one of the first hearing instrument and the second hearing instrument comprise one or more sensors and “wherein at least one of the first input data and the second input data comprise data consistent with the user tilting their head.” Lawrence describes hearing device 110 as including a displacement sensor 119 and hearing device 120 as including a displacement sensor 129. Id. at ¶¶ 49–52, 80–89, FIGs.1, 7A, 7B, 8A, 8B. 
Lawrence describes sensors 119 and 129 as displacement sensors that provide inertial data or light data, like an accelerometer, gyroscope, a light sensor or camera. Id. at ¶ 49. Sensors 119 and 129 generate displacement data consistent with a user head tilt since both sets of sensors are operable to generate rotational displacement data 328, 338 from accelerometers and gyroscopes. Id. at ¶¶ 81, 82, FIGs.8A, 8B. In other words, because sensors 119 and 129 include accelerometers and/or gyroscopes, they will produce data that reflects when a user tilts his head. Id. at ¶¶ 35, 82, 83, 109. Lawrence filters this data to distinguish head tilting from single-sided actuation by contact with a surface of hearing devices 110, 120. Id. at ¶ 35. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 10 depends on claim 1 and further requires the following: “wherein the first input data and the second input data respectively comprise input data consistent with the user pressing and holding down on the first hearing instrument and the user pressing and holding down on the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that the gestures may be made up of any combination of gestures and from synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized pressing on device 110 and pressing on device 120. See also id. 
at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern). Alternatively, it would have been obvious to combine long pressing at both hearing devices as suggested by Lawrence’s broad disclosure and suggesting to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 11 is drawn to “a method.” The following table illustrates the correspondence between the claimed method and the Lawrence reference. Claim 11 The Lawrence Reference “11. A method comprising: The Lawrence reference describes a corresponding method for operating system 100 comprising two hearing devices 110, 120, one of the two being worn in each of a user’s ears. Lawrence at Abs., ¶¶ 46–68, FIGs.1, 2. Lawrence’s hearing devices 110, 120 are communicatively linked by communication link 116. Id. at ¶ 51, FIG.1. Hearing device 110 includes a processor 112 and hearing device 120 includes a processor 122. Id. at ¶ 47, FIG.1. One of ordinary skill would have understood that processors 112 and 122 are implemented in circuitry since they execute the instructions stored in memories 113, 123. See id. at ¶ 53. “obtaining, by a processing system, from at least one of a first physical button or first touch responsive surface “obtaining, by the processing system, from at least one of a second physical button or second touch responsive surface Hearing device 110 includes a displacement sensor 119 and hearing device 120 includes a displacement sensor 129. Id. at ¶¶ 49–52, 80–89, FIGs.1, 7A, 7B, 8A, 8B. Lawrence describes sensors 119 and 129 as displacement sensors that provide inertial data or light data, like an accelerometer, gyroscope, a light sensor or camera. Id. at ¶ 49. 
The use of an inertial sensor effectively turns the outer surface of hearing device 110 into a touch-responsive surface by converting a user’s touch of the outer surface into a displacement measurement that is proportional to the direction and magnitude with which a user touches the outer surface. Id. at ¶¶ 49, 77, 81. Additionally, Lawrence describes providing environmental sensors, such as capacitive and resistive sensors that detect a user touching hearing device 110 to confirm gestures made on the hearing device. Id. at ¶ 59. Each hearing device’s processor obtains displacement data, or user input, through their respective displacement sensor. Id. at ¶¶ 49–52, 80–89, FIGs.1, 7A, 7B, 8A, 8B. For example, a user may translate or rotate a hearing device. Id. The displacement sensors will detect the translation or rotation and provide a measure of the user input to a corresponding processor. Id. “identifying, by the processing system, a command based on the first input data and the second input data; Processors 112, 122 then communicate the displacement data to each other and process the data to detect a gesture. Id. at ¶¶ 55, 90–113, FIGs.9–13. “executing, by the processing system, the command.” After detecting a gesture, hearing devices 110, 120 control the functioning of hearing devices 110, 120 based on the identified gesture. Id. at ¶¶ 22, 41, FIG.13 (block 608). Table 2 For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 12 depends on claim 11 and further requires the following: “wherein: “the first input data represents the user pressing and holding the first touch responsive surface of the first hearing instrument and the second input data represents the user tapping the second touch responsive surface of the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. 
Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that the gestures may be made up of any combination of gestures and from synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized tapping on device 110 and pressing on device 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern). Alternatively, it would have been obvious to combine tapping and long pressing as suggested by Lawrence’s broad disclosure and suggesting to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 13 depends on claim 11 and further requires the following: “wherein the command is a first command, “the method further comprising: “obtaining, by the processing system, third input data from the at least one of a first physical button or first touch responsive surface “determining, by the processing system, that the user has ceased interacting with the first hearing instrument before obtaining fourth input data from the at least one of a second physical button or second touch responsive surface “identifying, by the processing system, a second command; and “executing, by the first hearing instrument and the second hearing instrument, the second command.” Similarly, Lawrence describes discriminating patterns that are based on displacement data from a single one of hearing devices 110, 120 and patterns that are based on displacement data from both of hearing devices 110 and 120. Lawrence at ¶¶ 90–107, FIGs.9–11. 
For example, a user may tap hearing device 110, producing a displacement amplitude along a single axis. Id. at ¶¶ 108–112, FIG.12. The displacement data is compared to displacement data from hearing device 120 to determine a variation measurement. Id. at ¶ 113, FIG.13. The processors of the hearing devices then determine if a user interacted with hearing device 110, hearing device 120 or both. Id. In this way, the processors determine a second command and execute the second command. See id. at ¶ 113, FIG.13. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 14 depends on claim 11 and further requires the following: “wherein identifying the command comprises waiting, by the processing system, a period of time to determine whether the first input data and the second input data have been received by the first hearing instrument and the second hearing instrument, respectively.” Likewise, Lawrence uses time in several manners to evaluate displacement data and to detect tapping, holding and swiping-based gestures. Lawrence at ¶¶ 91, 93, 99, 103, 111, 112, FIG.12. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 15 depends on claim 11 and further requires the following: “wherein the first input data is consistent with the user pressing and holding the first hearing instrument and the second input data is consistent with the user tapping the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that the gestures may be made up of any combination of gestures and from synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. 
One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized tapping on device 110 and pressing on device 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern). Alternatively, it would have been obvious to combine tapping and long pressing as suggested by Lawrence’s broad disclosure and suggesting to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 16 depends on claim 11 and further requires the following: “wherein the first input data is consistent with the user pressing and holding the first hearing instrument and the second input data is consistent with the user pressing and holding the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that the gestures may be made up of any combination of gestures and from synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized pressing on device 110 and pressing on device 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern). 
Alternatively, it would have been obvious to combine long pressing at both hearing devices as suggested by Lawrence’s broad disclosure and suggesting to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 17 depends on claim 11 and further requires the following: “further comprising determining, by the processing system, whether a predetermined period of time has elapsed before identifying the command during which the first hearing instrument and the second hearing instrument wait before determining whether the command was given by the user.” Likewise, Lawrence uses time in several manners to evaluate displacement data and to detect tapping, holding and swiping-based gestures. Lawrence at ¶¶ 91, 93, 99, 103, 111, 112, FIG.12. For example, to detect double tapping at both hearing devices 110, 120, each hearing device will wait a period of time to determine if a single tap is input or if a double tap is input. See id. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 19 depends on claim 11 and further requires the following: “further comprising: determining, by the processing system, that only one of third input data or fourth input data has been obtained from the first and second hearing instruments; “identifying, by the processing system, a non-synchronous command based on which of the third input data and the fourth input data that has been obtained, “wherein the non-synchronous command is a second command where input from only one of the first hearing instrument and the second hearing instrument has been obtained; and “executing, by the processing system, the non-synchronous command.” Similarly, Lawrence describes discriminating patterns that are based on displacement data from a single one of hearing devices 110, 120 and patterns that are based on displacement data from both of hearing devices 110 and 120. 
Lawrence at ¶¶ 90–107, FIGs.9–11. For example, a user may tap hearing device 110, producing a displacement amplitude along a single axis. Id. at ¶¶ 108–112, FIG.12. The displacement data is compared to displacement data from hearing device 120 to determine a variation measurement. Id. at ¶ 113, FIG.13. The processors of the hearing devices then determine if a user interacted with hearing device 110, hearing device 120 or both. Id. In this way, the processors will determine a second command from displacement data at hearing device 110 and execute the second command without receiving a synchronous user input from hearing device 120. See id. at ¶ 113, FIG.13. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 20 is drawn to “a non-transitory computer-readable medium.” The following table illustrates the correspondence between the claimed medium and the Lawrence reference. Claim 20 The Lawrence Reference “20. A non-transitory computer-readable medium, configured to cause one or more processors to: The Lawrence reference describes a corresponding system 100 comprising two hearing devices 110, 120, one of the two being worn in each of a user’s ears. Lawrence at Abs., ¶¶ 46–68, FIGs.1, 2. Lawrence’s hearing devices 110, 120 are communicatively linked by communication link 116. Id. at ¶ 51, FIG.1. Hearing device 110 includes a processor 112 and hearing device 120 includes a processor 122. Id. at ¶ 47, FIG.1. One of ordinary skill would have understood that processors 112 and 122 are implemented in circuitry since they execute the instructions stored in memories 113, 123. See id. at ¶ 53. “obtain, from at least one of a first physical button or first touch responsive surface “obtain, from at least one of a second physical button or second touch responsive surface Hearing device 110 includes a displacement sensor 119 and hearing device 120 includes a displacement sensor 129. Id. 
at ¶¶ 49–52, 80–89, FIGs.1, 7A, 7B, 8A, 8B. Lawrence describes sensors 119 and 129 as displacement sensors that provide inertial data or light data, like an accelerometer, gyroscope, a light sensor or camera. Id. at ¶ 49. The use of an inertial sensor effectively turns the outer surface of hearing device 110 into a touch-responsive surface by converting a user’s touch of the outer surface into a displacement measurement that is proportional to the direction and magnitude with which a user touches the outer surface. Id. at ¶¶ 49, 77, 81. Additionally, Lawrence describes providing environmental sensors, such as capacitive and resistive sensors that detect a user touching hearing device 110 to confirm gestures made on the hearing device. Id. at ¶ 59. Each hearing device’s processor obtains displacement data, or user input, through their respective displacement sensor. Id. at ¶¶ 49–52, 80–89, FIGs.1, 7A, 7B, 8A, 8B. For example, a user may translate or rotate a hearing device. Id. The displacement sensors will detect the translation or rotation and provide a measure of the user input to a corresponding processor. Id. “identify, a command based on the first input data and the second input data; and Processors 112, 122 then communicate the displacement data to each other and process the data to detect a gesture. Id. at ¶¶ 55, 90–113, FIGs.9–13. “execute the command.” After detecting a gesture, hearing devices 110, 120 control the functioning of hearing devices 110, 120 based on the identified gesture. Id. at ¶¶ 22, 41, FIG.13 (block 608). Table 3 For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim.

Obviousness

The following is a quotation of 35 U.S.C.
§ 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 5, 7, 8, 10, 12, 15, 16 and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Lawrence. Claim 21 is rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Lawrence and US Patent Application Publication 2011/0217967 (published 08 September 2011) (“Cohen”). Claim 5 depends on claim 4 and further requires the following: “wherein: the command is a first command, “the first input data and the second input data are data regarding previous synchronous input, and “the processing system is further configured to: modify, based on the data regarding previous synchronous input, the predetermined period of time; “obtain, from the at least one of the first physical button or the first touch responsive surface within the first hearing instrument, third input data from the user; “determine that the modified predetermined period of time has elapsed after obtaining the third input data and without obtaining fourth input data from the at least one of the second physical button or the second touch responsive surface within the second hearing instrument; and “based on the determination that the modified predetermined period of time has elapsed after obtaining the third input data and without obtaining the fourth input data from the at least one of the second physical button or the second touch responsive surface within the second hearing 
instrument, execute a second command.” Lawrence describes storing a set of patterns for a set of recognized gestures. Lawrence further describes that a gesture may be a sequential gesture defined by a sequence of sub-gestures and a maximum period within which each sequential sub-gesture must be made. Thus, when Lawrence’s hearing devices detect a first sub-gesture of a sequential gesture, Lawrence sets/modifies the maximum period used to detect the next sub-gesture in the detected sequential gesture. Accordingly, Lawrence reasonably suggests that two sequential gestures may exist with overlapping sequences that differ by at least one additional sub-gesture. For example, a first sequence may follow the pattern tap, press, tap (TPT) while a second sequence may follow the pattern tap, press, tap, tap (TPTT). One of ordinary skill would have immediately recognized this possibility from the common use of overloaded function signatures in programming languages, where a namespace includes multiple identically-named functions that are nevertheless distinguishable based on the number of arguments passed to each. For the foregoing reasons, the Lawrence reference makes obvious all limitations of the claim. Claim 7 depends on claim 1 and further requires the following: “wherein the first input data and the second input data are respectively consistent with the user pressing twice on the first hearing instrument and pressing twice on the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that the gestures may be made up of any combination of gestures and from synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. 
One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized double-tapping on hearing devices 110 and 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern). Alternatively, it would have been obvious to combine double tapping at both of Lawrence’s hearing devices in view of Lawrence’s broad disclosure and its suggestion to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 8 depends on claim 1 and further requires the following: “wherein the first input data and the second input data are respectively consistent with the user pressing and holding down on the first hearing instrument and the user tapping the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that gestures may be made up of any combination of gestures, including synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized pressing on device 110 and tapping on device 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern).
Alternatively, it would have been obvious to combine tapping and long pressing in view of Lawrence’s broad disclosure and its suggestion to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 10 depends on claim 1 and further requires the following: “wherein the first input data and the second input data respectively comprise input data consistent with the user pressing and holding down on the first hearing instrument and the user pressing and holding down on the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that gestures may be made up of any combination of gestures, including synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized pressing on device 110 and pressing on device 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern). Alternatively, it would have been obvious to combine long pressing at both hearing devices in view of Lawrence’s broad disclosure and its suggestion to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim.
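The sequential-gesture rationale applied to claim 5 above (a modified maximum period that closes a shorter sequence such as TPT before a longer overlapping sequence such as TPTT can complete) can be sketched in Python. This is purely an illustrative sketch: the pattern names, commands and timeout value are assumptions chosen for illustration, not disclosures of Lawrence or limitations of the claims.

```python
# Illustrative sketch only -- maps the examiner's TPT/TPTT example to code.
# Two overlapping sequential gestures are distinguished by waiting out a
# per-step maximum period after each sub-gesture: the shorter pattern fires
# only if no further sub-gesture arrives within that period.

PATTERNS = {
    ("tap", "press", "tap"): "volume_up",           # TPT (hypothetical command)
    ("tap", "press", "tap", "tap"): "volume_down",  # TPTT (hypothetical command)
}

def classify(events, step_timeout=0.5):
    """Map a list of (sub_gesture, timestamp) pairs to a command.

    A sequence is closed once more than `step_timeout` seconds elapse
    between consecutive sub-gestures; sub-gestures after that gap are
    ignored, so the longer TPTT pattern wins only when its extra tap
    arrives within the modified period.
    """
    seq = []
    for i, (gesture, t) in enumerate(events):
        if seq and t - events[i - 1][1] > step_timeout:
            break  # gap exceeded the maximum period; sequence is closed
        seq.append(gesture)
    return PATTERNS.get(tuple(seq))
```

For example, the event list tap (0.0 s), press (0.2 s), tap (0.4 s) yields the TPT command only because no fourth tap arrives within the step timeout; a fourth tap at 0.6 s would instead complete the longer TPTT pattern.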
Claim 21 depends on claim 1 and further requires the following: “wherein: the processing system is further configured to[,] based on obtaining the first input data, provide an indication of a menu of modes to the user, “to obtain the second input data, the processing system is further configured to obtain the second input data while the first hearing instrument obtains the first input data from the user, and “to identify the command, the processing system is further configured to identify the command as corresponding to a selected mode from the menu of modes.” Lawrence describes obtaining displacement data 211 and 212 simultaneously from hearing instruments 110 and 120, respectively. Lawrence, however, does not describe providing an indication of a menu of modes to the user after obtaining displacement data 211. Lawrence also does not identify a command based on a selected mode. The Cohen reference, however, teaches a voice menu for use in a variety of ear-level devices, including hearing aids and the like, such as Lawrence’s hearing instruments. Cohen at ¶¶ 25, 27, FIG.1. Cohen teaches using the voice menu to select functions or a desired mode (e.g., phone mode or an environmental mode that executes hearing aid functions, or commands) without requiring a complicated sequence of manual inputs. Id. at ¶¶ 4–8, 25, 26, 49–62. A user simply presses a button on the device to activate the voice menu. Id. at ¶ 8. The voice menu function then sequentially reads out a list of options. Id. at ¶¶ 8, 42–46, FIG.3. The user selects a preferred function/mode by pressing a button during a time interval in which the option is being read out. Id. The system then executes commands to enable the selected mode or execute the selected function. See id. Read in light of Lawrence, Cohen reasonably suggests modifying Lawrence’s hearing aid to include a similar voice menu feature. 
A user of Lawrence’s hearing instruments would tap one of the surfaces of the hearing instruments to produce displacement data that triggers a voice menu. The voice menu would read out options in sequence. A user would select a desired mode and its associated commands by tapping a hearing instrument again, producing displacement data in sync with the readout of the desired option. In this way, the Lawrence-Cohen device would provide an indication of a menu of modes in response to receiving displacement data from a touch sensitive hearing instrument surface and then identify a command (e.g., execute hearing aid functions in an environmental mode) to execute based on a selection of a mode presented in the voice menu. For the foregoing reasons, the combination of the Lawrence and Cohen references makes obvious all limitations of the claim. Claim 12 depends on claim 11 and further requires the following: “wherein: the first hearing instrument comprises a first touch responsive surface, the second hearing instrument comprises a second touch responsive surface, and “the first input data represents the user pressing and holding the first touch responsive surface of the first hearing instrument and the second input data represents the user tapping the second touch responsive surface of the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that gestures may be made up of any combination of gestures, including synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G.
One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized pressing on device 110 and tapping on device 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern). Alternatively, it would have been obvious to combine tapping and long pressing in view of Lawrence’s broad disclosure and its suggestion to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 15 depends on claim 11 and further requires the following: “wherein the first input data is consistent with the user pressing and holding the first hearing instrument and the second input data is consistent with the user tapping the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that gestures may be made up of any combination of gestures, including synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized pressing on device 110 and tapping on device 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern).
Alternatively, it would have been obvious to combine tapping and long pressing in view of Lawrence’s broad disclosure and its suggestion to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim. Claim 16 depends on claim 11 and further requires the following: “wherein the first input data is consistent with the user pressing and holding the first hearing instrument and the second input data is consistent with the user pressing and holding the second hearing instrument.” Lawrence describes each hearing device 110, 120 as being touch sensitive in order to detect tapping, double tapping and long press gestures. Lawrence at ¶¶ 29, 91–93, FIGs.9A, 9B, 9C. Lawrence further describes that gestures may be made up of any combination of gestures, including synchronized gestures made simultaneously at both hearing devices 110, 120. See id. at ¶¶ 29, 89, 97, FIGs.8A, 9A, 9C, 9G. One of ordinary skill would have understood this broad disclosure as including any combination of tapping, double tapping and long pressing at both of the hearing devices at the same time, including the claimed synchronized pressing on device 110 and pressing on device 120. See also id. at ¶ 113, FIG.13 (describing the determination of a variation measurement to detect the relative displacement of each hearing device in order to detect a gesture pattern). Alternatively, it would have been obvious to combine long pressing at both hearing devices in view of Lawrence’s broad disclosure and its suggestion to combine the various features of the disclosure. For the foregoing reasons, the Lawrence reference anticipates all limitations of the claim.
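The synchronized-gesture readings applied to claims 7, 8, 10, 12, 15 and 16 above all reduce to pairing per-device gestures that occur within a common time window. The following minimal sketch illustrates that pairing; the gesture-to-command table, window length, and function names are hypothetical assumptions, not taken from Lawrence or the claims.

```python
# Illustrative sketch only: treat two per-device gesture reports as one
# synchronized binaural gesture when they arrive within a short sync window.

SYNC_WINDOW = 0.25  # seconds; illustrative value only

COMBOS = {
    ("double_tap", "double_tap"): "toggle_mute",  # cf. claim 7's double taps
    ("long_press", "tap"): "next_program",        # cf. claims 8, 12, 15
    ("long_press", "long_press"): "power_off",    # cf. claims 10, 16
}

def combine(first, second):
    """first/second are (gesture, timestamp) pairs from devices 110 and 120.

    Returns the mapped command when the two gestures are synchronized,
    otherwise None (the inputs are then treated as independent).
    """
    g1, t1 = first
    g2, t2 = second
    if abs(t1 - t2) <= SYNC_WINDOW:
        return COMBOS.get((g1, g2))
    return None  # not synchronized; treat as single-ear input
```

Under these assumptions, a long press on device 110 and a tap on device 120 within a quarter second of each other resolve to a single binaural command, while the same two gestures a second apart do not.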
Claim 18 depends on claim 17 and further requires the following: “wherein: the command is a first command, “the first input data and the second input data are data regarding a previous synchronous input; and “the method further comprises: “modifying, by the processing system and based on the data regarding the previous synchronous input, the predetermined period of time; “obtaining, by the processing system and from the at least one of a first physical button or first touch responsive surface within the first hearing instrument, third input data from the user; “determining, by the processing system, that the modified predetermined period of time has elapsed after obtaining the third input data and without obtaining fourth input data from the at least one of a second physical button or second touch responsive surface within the second hearing instrument; and “based on the determining that the modified predetermined period of time has elapsed after obtaining the third input data and without obtaining further input data from the at least one of a second physical button or second touch responsive surface within the second hearing instrument, executing, by the processing system, a second command.” Lawrence describes storing a set of patterns for a set of recognized gestures. Lawrence further describes that a gesture may be a sequential gesture defined by a sequence of sub-gestures and a maximum period within which each sequential sub-gesture must be made. Thus, when Lawrence’s hearing devices detect a first sub-gesture of a sequential gesture, Lawrence sets/modifies the maximum period used to detect the next sub-gesture in the detected sequential gesture. Accordingly, Lawrence reasonably suggests that two sequential gestures may exist with overlapping sequences but differ by at least one additional sub-gesture. For example, a first sequence may follow the pattern tap, press, tap (TPT) while a second sequence may follow the pattern tap, press, tap, tap (TPTT).
One of ordinary skill would have immediately recognized this possibility from the common use of overloaded function signatures in programming languages, where a namespace includes multiple identically-named functions that are nevertheless distinguishable based on the number of arguments passed to each. For the foregoing reasons, the Lawrence reference makes obvious all limitations of the claim. Summary Claims 1 and 3–21 are rejected under at least one of 35 U.S.C. §§ 102 and 103 as being unpatentable over the cited prior art. In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention. Additional Citations The following table lists additional references. These references are not relied on in this Office action, but include information that is relevant to the subject matter disclosed in this Application. The Examiner advises reviewing these citations when preparing a response.
Citation: US 2014/0086438
Relevance: Detects nodding gestures (Table 4)
Response to Applicant’s Arguments Applicant’s Reply (21 January 2026) has substantively amended all the claims. This Office action has updated the rejections accordingly. Applicant’s Reply at 9–13 includes comments pertaining to the rejections presented in this Office action. Applicant comments that Lawrence does not anticipate amended claim 1 because Lawrence’s sensors produce variation data, or differences in displacement between the two hearing instruments, while claim 1 requires obtaining first input data and second input data from a first or second physical button or first/second touch responsive surface. (Reply at 9–11). This comment is not persuasive. As shown in the updated rejection of claim 1, Lawrence’s hearing instruments contain sensors, such as accelerometers and gyroscopes, that detect inertial movements of a hearing instrument. In particular, Lawrence describes the detection of a user tapping a surface of a hearing instrument due to the displacement the tapping induces on the inertial sensors. The physical connection between the surface of Lawrence’s hearing instruments and the inertial sensors turns the surfaces of the hearing instruments into touch-sensitive surfaces. And while it is true that Lawrence processes the displacement data to produce variation data, it is also true that Lawrence generates first and second inertial/displacement data from the tapping of a hearing instrument surface. Applicant further comments that Lawrence does not describe all features of claim 9 because Lawrence generates variation data rather than first or second input data that is consistent with a user tilting his head. (Reply at 11–12). However, Applicant overlooks that the variation data is based on the displacement data generated by each of Lawrence’s hearing instruments. The displacement data is consistent with head tilting as explained in detail in Lawrence at ¶¶ 35, 82, 83, 109.
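The distinction argued above, between the per-instrument displacement data and the variation data derived from it, can be illustrated with a short sketch. This is a hypothetical model only, not code or an algorithm from Lawrence: it assumes the variation measurement is the per-sample difference between the two instruments' displacement signals, so correlated motion at both ears (e.g., head tilting) yields low variation while a tap on one surface yields a large one-sided spike.

```python
# Hypothetical illustration: deriving a "variation measurement" from the
# displacement data each instrument's inertial sensors report, then using it
# to separate whole-head motion from a tap on a single instrument's surface.

def variation(displacement_110, displacement_120):
    """Per-sample absolute difference between the two displacement signals."""
    return [abs(a - b) for a, b in zip(displacement_110, displacement_120)]

def classify_motion(displacement_110, displacement_120, threshold=0.5):
    """Correlated displacement at both ears reads as head motion (tilt);
    a large asymmetric spike reads as a tap on one instrument."""
    if max(variation(displacement_110, displacement_120)) > threshold:
        return "tap"
    return "head_motion"
```

The threshold and signal shapes are assumptions for illustration; the point is only that both per-instrument displacement data and the derived variation data exist in such a pipeline.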
Applicant’s remaining comments (Reply at 12–13) are either similar to those addressed above or are not drawn with sufficient specificity to engender a response under 37 C.F.R. § 1.111(b). For the foregoing reasons, Applicant has not persuasively established any error in the Office action. All the rejections will be maintained. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 C.F.R. § 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 C.F.R. § 1.17(a)) pursuant to 37 C.F.R. § 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WALTER F BRINEY III whose telephone number is (571)272-7513. The examiner can normally be reached M-F 8 am-4:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn Edwards, can be reached at 571-270-7136.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Walter F Briney III/
Walter F Briney III
Primary Examiner
Art Unit 2692
3/18/2026

1 Claim 7 is rejected alternatively here under § 103 in order to compact prosecution.

Prosecution Timeline

Mar 04, 2024
Application Filed
Oct 17, 2025
Non-Final Rejection — §102, §103
Jan 21, 2026
Response Filed
Mar 18, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598444
Apparatus and Method for Rendering a Sound Scene Using Pipeline Stages
2y 5m to grant Granted Apr 07, 2026
Patent 12598442
AUTOMATIC LOUDSPEAKER DIRECTIVITY ADAPTATION
2y 5m to grant Granted Apr 07, 2026
Patent 12598412
Sound Signal Processing Method and Headset Device
2y 5m to grant Granted Apr 07, 2026
Patent 12587791
SOUND-GENERATING DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12581245
LOUDSPEAKER
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
65%
Grant Probability
69%
With Interview (+3.8%)
2y 12m
Median Time to Grant
Moderate
PTA Risk
Based on 540 resolved cases by this examiner. Grant probability derived from career allow rate.
