Prosecution Insights
Last updated: April 19, 2026
Application No. 18/740,555

VOICE COMMAND RECEIVING DEVICE, VOICE COMMAND RECEIVING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

Status: Non-Final OA (§103)
Filed: Jun 12, 2024
Examiner: ROBERTS, SHAUN A
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: JVCKenwood Corporation
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 76% (above average; 491 granted / 647 resolved; +13.9% vs TC avg)
Interview Lift: +10.3% for resolved cases with interview (moderate, roughly +10%)
Typical Timeline: 2y 10m average prosecution; 31 applications currently pending
Career History: 678 total applications across all art units
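The headline numbers above are simple arithmetic on the examiner's case history. A minimal sketch of how they appear to be derived; the additive interview adjustment is an assumption about this tool, not a documented formula:

```python
# Hypothetical reconstruction of the dashboard arithmetic; the tool's
# actual model is not documented on this page.
granted, resolved = 491, 647           # examiner's career totals
allow_rate = 100 * granted / resolved  # 75.9%, displayed as 76%
interview_lift = 10.3                  # percentage-point lift with interview

# Assumed: the "with interview" figure is just rate + lift.
with_interview = allow_rate + interview_lift  # ~86.2%, displayed as 86%

print(round(allow_rate, 1), round(with_interview, 1))
```

Under that assumption the displayed 76% and 86% are the rounded values of 75.9% and 86.2%.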

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 29.5% (-10.5% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Deltas are shown against the Tech Center average estimate (the black line in the original chart). Based on career data from 647 resolved cases.
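The per-statute deltas can be checked against the implied Tech Center baseline, assuming each delta is simply the examiner's rate minus the TC average (an assumption; the tool does not publish its formula):

```python
# Back out the implied Tech Center baseline for each statute.
# Assumes delta = examiner_rate - tc_avg; values taken from the table above.
stats = {"101": (7.6, -32.4), "103": (49.2, 9.2),
         "102": (29.5, -10.5), "112": (3.5, -36.5)}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(tc_avg)
```

Under that assumption all four statutes back out to the same 40.0% baseline, which suggests the chart compares each statute's share of rejections against a single reference line rather than per-statute averages.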

Office Action

§103
DETAILED ACTION

1. This action is responsive to Application No. 18/740,555, filed 6/12/2024. All claims have been examined and are currently pending.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

3. Priority documents for JP2021-209852 have not yet been received (see priority document exchange failure status report, 8/8/2024).

Information Disclosure Statement

4. The information disclosure statement (IDS) submitted is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

5. The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Interpretation under 35 U.S.C. 112(f)

6. Claim limitations have been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because they use a generic placeholder coupled with functional language without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier. Since the claim limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, claims 1-9 have been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof. A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation: Fig. 1, element 11; para. [0139].
If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action. If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim(s) so that it/they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim(s) recite(s) sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).

Claim Rejections - 35 USC § 103

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

9. Claims 1-6 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Gowda et al (2023/0035752) in view of Doyle et al (2006/0265223).

Regarding claim 1 Gowda et al (2023/0035752) teaches A voice command receiving device ([0001] The present disclosure generally relates to systems and methods of responding to audible commands and/or adjusting vehicle components.) comprising: a voice command receiving unit configured to receive a voice command (4; [0029] The vehicle 12 includes an audio device 24. The audio device 24 is configured to receive an audible command from an occupant of the vehicle 12; 0030); a detection unit configured to, in an environment in which the voice command is uttered, detect a condition leading to a situation in which a voice command is not properly recognizable (0032: the controller 16 is configured to generate and/or use one or both of the first confidence score and/or the second confidence score depending on whether one or more threshold is met and/or a domain match is made; [0061] In an embodiment, the context data can indicate that an occupant behavior caused the first output to not meet the first threshold. For example, the occupant state module 60c can be configured to detect if the occupant covered his or her mouth or yawned while speaking the audible command as discussed herein.
In this case, the method 100 at step 108 can use this occupant data to attribute the failure to meet the first threshold at step 106 as being caused by the occupant’s mouth covering or yawn due to mispronunciation of an audible command that is likely correct. Thus, in an embodiment, the controller 16 proceeds to step 118 upon determining that an occupant behavior (e.g., a yawn) occurred during speaking of the audible command.); and an implementation control unit configured to, when the voice command receiving unit receives a voice command, implement a function with respect to the received voice command ([0031] The audio device 24 is configured to generate command data based on an audible command.; ASR; NLU; 0040: adjusting vehicle components based on audible commands; 53), wherein the voice command receiving unit is configured to: when the detection unit determines absence of a condition leading to a situation in which a voice command is not properly recognizable, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a first threshold value (0032: the controller 16 is configured to generate and/or use one or both of the first confidence score and/or the second confidence score depending on whether one or more threshold is met and/or a domain match is made; [0039] In an embodiment, the controller 16 is programmed to access context data relating to the current state of the vehicle 12 and use the context data for generation of at least one confidence score as discussed herein. 
In an embodiment, a first confidence score is generated using the command data and not the context data; 51-54 – recognition without context data and compared to first threshold; 0054: first threshold; first confidence score; 0063); and when the detection unit determines presence of a condition leading to a situation in which a voice command is not properly recognizable, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a second threshold value {that is smaller than the first threshold value} ([0039] In an embodiment, the controller 16 is programmed to access context data relating to the current state of the vehicle 12 and use the context data for generation of at least one confidence score as discussed herein. In an embodiment, a first confidence score is generated using the command data and not the context data, and a second confidence score is generated using the command data and the context data. In an embodiment, the controller 16 generates the second confidence score when the context data corresponds to at least a portion of the command data; [0061] In an embodiment, the context data can indicate that an occupant behavior caused the first output to not meet the first threshold. For example, the occupant state module 60c can be configured to detect if the occupant covered his or her mouth or yawned while speaking the audible command as discussed herein. In this case, the method 100 at step 108 can use this occupant data to attribute the failure to meet the first threshold at step 106 as being caused by the occupant’s mouth covering or yawn due to mispronunciation of an audible command that is likely correct. 
Thus, in an embodiment, the controller 16 proceeds to step 118 upon determining that an occupant behavior (e.g., a yawn) occurred during speaking of the audible command.; 62-64; 0065: second threshold; second confidence score).

Gowda does not specifically teach, but Doyle et al (2006/0265223) teaches, a second threshold value that is smaller than the first threshold value (Abstract; rejection threshold reduced; 17; [0024] If the measurement of the input signal quality is low, the rejection threshold can be reduced).

It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Doyle for improved speech recognition in different contexts and situations. Gowda already teaches recognizing user or environmental situations that could lead to poor recognition, and incorporating context and multiple thresholds to still perform and complete recognition. Doyle also teaches adjusting thresholds to accommodate different conditions when attempting to perform recognition, and one could thus look to Doyle to allow the second threshold of Gowda to be smaller than the first threshold value for improved recognition, allowing recognition to still be performed when faced with non-ideal circumstances and presenting a reasonable expectation of success. The combination allows the following: the user can be better guided should a problem occur, which in turn will increase perceived recognition engine performance (Doyle et al 0017); and reduced confusion in voice interaction, particularly when there is an error (e.g., an ASR or NLU error), by instructing the speaker as to what corrective measures to take in a follow-up audible command. The present disclosure also provides systems and methods which use the surrounding context of the vehicle to help interpret the type of error. In doing so, the present disclosure allows for clarity for the user in picking the right dialog repair strategy (Gowda 0003).
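The claim 1 dispute centers on a simple control structure: a stricter recognition threshold under normal conditions and a relaxed one when a condition impairing recognition (covered mouth, yawn, background noise) is detected. A minimal sketch of that logic; the names and numeric values below are invented for illustration, since neither the claims nor Gowda/Doyle recite concrete thresholds:

```python
# Illustrative sketch of the two-threshold acceptance logic at issue in
# claim 1. Threshold values are invented, not taken from any reference.
FIRST_THRESHOLD = 0.80   # applies under normal conditions
SECOND_THRESHOLD = 0.60  # relaxed threshold, applies only when a
                         # recognition-impairing condition is detected

def accept_command(confidence: float, degraded: bool) -> bool:
    """Accept a voice command when its recognition confidence clears the
    threshold applicable to the detected environment."""
    threshold = SECOND_THRESHOLD if degraded else FIRST_THRESHOLD
    return confidence >= threshold

# A 0.7-confidence utterance fails the strict threshold but passes the
# relaxed one once a degrading condition (e.g. a covered mouth) is detected.
print(accept_command(0.7, degraded=False))  # False
print(accept_command(0.7, degraded=True))   # True
```

The obviousness question is whether combining Gowda's condition detection with Doyle's threshold reduction renders this branch structure obvious.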
Regarding claim 2 Gowda teaches The voice command receiving device according to claim 1, wherein as a condition leading to a situation in which a voice command is not properly recognizable, the detection unit is configured to detect orientation of face of a person who utters the voice command (0035; [0047] In an embodiment, the context data related to the current occupant of the vehicle 12 includes data from an image taken of the current occupant by an image sensor 30. From the image, the occupant state module 60c is configured to detect conditions such as the occupant’s line of sight and/or head position based on eye and/or head direction; 0061), and the voice command receiving unit is configured to: when the detection unit determines that orientation of face of the person is toward a microphone which acquires uttered voice of the voice command, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a first threshold value (51-54 – recognition without context and using first threshold); and when the detection unit determines that orientation of face of the person is not toward the microphone, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a second threshold value {that is smaller than the first threshold value} (0061-65 – recognition with context and using second threshold). 
Rejected for similar rationale and reasoning as claim 1, where Doyle teaches the second threshold smaller than the first.

Regarding claim 3 Gowda teaches The voice command receiving device according to claim 1, wherein as a condition leading to a situation in which a voice command is not properly recognizable, the detection unit is configured to detect presence or absence of an object covering mouth region of a person who utters the voice command ([0061] In an embodiment, the context data can indicate that an occupant behavior caused the first output to not meet the first threshold. For example, the occupant state module 60c can be configured to detect if the occupant covered his or her mouth or yawned while speaking the audible command as discussed herein. In this case, the method 100 at step 108 can use this occupant data to attribute the failure to meet the first threshold at step 106 as being caused by the occupant’s mouth covering or yawn due to mispronunciation of an audible command that is likely correct. Thus, in an embodiment, the controller 16 proceeds to step 118 upon determining that an occupant behavior (e.g., a yawn) occurred during speaking of the audible command.), and the voice command receiving unit is configured to: when the detection unit determines absence of an object covering mouth region of the person, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a first threshold value (51-54); and when the detection unit determines presence of an object covering mouth region of the person, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a second threshold value {that is smaller than the first threshold value} (61-65).
Rejected for similar rationale and reasoning as claim 1, where Doyle teaches the second threshold smaller than the first.

Regarding claim 4 Gowda teaches The voice command receiving device according to claim 1, wherein as a condition leading to a situation in which a voice command is not properly recognizable, the detection unit is configured to detect volume level of background sound of environment in which the voice command is received (0036: noise sensor; [0060] As an example, an emergency vehicle driving in the same lane as the vehicle 12 may cause the driver to require a volume adjustment of an in-vehicle system or a window adjustment to decrease the noise. The context module 50 is configured to generate context data corresponding to the emergency vehicle, for example, by detecting the siren of the emergency vehicle or other traffic data related to the emergency vehicle’s presence), and the voice command receiving unit is configured to: when volume level of the background sound is determined to be lower than a predetermined value, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a first threshold value (51-54); and when volume level of the background sound is determined to be equal to or higher than the predetermined value, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a second threshold value {that is smaller than the first threshold value} (61-65).
Rejected for similar rationale and reasoning as claim 1, where Doyle teaches the second threshold smaller than the first.

Regarding claim 5 Doyle et al (2006/0265223) teaches The voice command receiving device according to claim 1, wherein as a condition leading to a situation in which a voice command is not properly recognizable, the detection unit is configured to detect volume level of uttered voice of the voice command (24: if input signal quality is low; 27: quality of input signal; [0067] The ASR system 310 includes an input signal quality measuring means 320 which quantifies the quality of the input signal. A rejection threshold adjustment means 322 is provided in a statistical pattern matcher 324 of the ASR system 310. [0068] An input signal quality measure can be analyzed as follows.; 70 loudness), and the voice command receiving unit is configured to: when volume level of uttered voice of the voice command is determined to be equal to or higher than a predetermined value, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a first threshold value (0024); and when volume level of uttered voice of the voice command is determined to be lower than a predetermined value, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a second threshold value that is smaller than the first threshold value ([0024] If the measurement of the input signal quality is low, the rejection threshold can be reduced and, if the measurement of the input signal quality is high, the rejection threshold can be increased; 29; 0111).
It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate uttered voice characteristics (of Doyle) to cover additional circumstances that could affect voice recognition and adapt the thresholds accordingly for improved speech recognition in various circumstances and conditions. Gowda already teaches a list of conditions that can be recognized that could impact recognition and adjusting thresholds. Thus, one could look to Doyle to further incorporate additional information about the conditions for additional compensation, ensuring recognition can still be carried out in various circumstances and scenarios.

Regarding claim 6 Doyle teaches The voice command receiving device according to claim 1, wherein as a condition leading to a situation in which a voice command is not properly recognizable, the detection unit is configured to detect volume level difference between volume level of background sound of environment in which the voice command is received and volume level of uttered voice of the voice command (0069: SNR), and the voice command receiving unit is configured to: when volume level of uttered voice of the voice command is determined to be higher by the volume level difference equal to or greater than a predetermined value, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a first threshold value (24; 29); and when the volume level difference is determined to be smaller than a predetermined value or when volume level of the background sound of environment, in which the voice command is received, is determined to be greater by a difference equal to or greater than a predetermined value, receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a second threshold value that is smaller than the first threshold value (24; 29; [0067] The ASR system 310 includes an input signal quality measuring means 320 which quantifies the quality of the input signal. A rejection threshold adjustment means 322 is provided in a statistical pattern matcher 324 of the ASR system 310. [0068] An input signal quality measure can be analyzed as follows. [0069] (1) SNR (signal-to-noise) measure: A signal-to-noise estimate is the ratio between good candidate signal (speech) to background noise. A high value suggests that the signal is well differentiated from the noise; a low value, that signal and noise are less distinguishable. The SNR affects later processing. The SNR can be estimated by standard signal processing techniques which usually involve frequency as well as time domain analyses.; 74-75; 89; 94; 111).

Rejected for similar rationale and reasoning as claim 5.

Regarding claim 10 Gowda and Doyle teach A voice command receiving method implemented in a voice command receiving device, comprising: detecting, in an environment in which a voice command is uttered, a condition leading to a situation in which a voice command is not properly recognizable; receiving, when it is determined to have absence of a condition leading to a situation in which a voice command is not properly recognizable, a voice command at a voice recognition rate, which is regarding the voice command, equal to or greater than a first threshold value; receiving, when it is determined to have presence of a condition leading to a situation in which a voice command is not properly recognizable, a voice command at a voice recognition rate, which is regarding the voice command, equal to or greater than a second threshold value that is smaller than the first threshold value; and implementing, when the voice command is received, a function with respect to the received voice command.
Claim 10 recites limitations similar to claim 1 and is rejected for similar rationale and reasoning.

Regarding claim 11 Gowda and Doyle teach A non-transitory computer-readable storage medium storing a computer program causing a computer to execute: detecting, in an environment in which a voice command is uttered, a condition leading to a situation in which a voice command is not properly recognizable; receiving, when it is determined to have absence of a condition leading to a situation in which a voice command is not properly recognizable, a voice command at a voice recognition rate, which is regarding the voice command, equal to or greater than a first threshold value; receiving, when it is determined to have presence of a condition leading to a situation in which a voice command is not properly recognizable, a voice command at a voice recognition rate, which is regarding the voice command, equal to or greater than a second threshold value that is smaller than the first threshold value; and implementing, when the voice command is received, a function with respect to the received voice command. Claim 11 recites limitations similar to claim 1 and is rejected for similar rationale and reasoning.

10. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Gowda et al (2023/0035752) in view of Doyle et al (2006/0265223) in further view of Tang (2017/0025121).
Regarding claim 7 Gowda and Doyle teach The voice command receiving device according to claim 1, wherein as a condition leading to a situation in which a voice command is not properly recognizable, {the detection unit is configured to detect distance between a microphone, which acquires uttered voice of the voice command, and a person who utters the voice command,} and the voice command receiving unit is configured to: {when the detection unit determines that the distance is shorter than a predetermined distance,} receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a first threshold value; and {when the detection unit determines that the distance is equal to or longer than the predetermined distance,} receive a voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a second threshold value that is smaller than the first threshold value. 
Rejected for similar rationale and reasoning as claim 1. Gowda and Doyle do not specifically teach the bracketed limitations, where Tang (2017/0025121) teaches, as a condition leading to a situation in which a voice command is not properly recognizable, the detection unit is configured to detect distance between a microphone, which acquires uttered voice of the voice command, and a person who utters the voice command (8; 20; 55-57: mobile terminal acquires a distance to a user, and determines, according to the distance), and the voice command receiving unit is configured to: when the detection unit determines that the distance is shorter than a predetermined distance (57: if the distance to the user is less); and when the detection unit determines that the distance is equal to or longer than the predetermined distance (57: if the distance to the user is not less).

It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate distance characteristics (of Tang) to cover additional circumstances that could affect voice recognition and adapt the thresholds accordingly for improved speech recognition in various circumstances and conditions. Gowda already teaches a list of conditions that can be recognized that could impact recognition and adjusting thresholds. Thus, one could look to Tang to further incorporate additional information about the conditions for additional compensation, ensuring recognition can still be carried out in various circumstances and scenarios, which can perform speech collection and recognition in a more flexible manner, and improve a recognition rate of a speech signal (Tang 0005).

11. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Gowda et al (2023/0035752) in view of Doyle et al (2006/0265223) in further view of Fountaine (2018/0325470).
Regarding claim 8, Gowda does not specifically teach, but Fountaine teaches, The voice command receiving device according to claim 1, wherein, with respect to a voice command having high urgency or high immediacy, the voice command receiving unit is configured to receive the voice command at a voice recognition rate, which is regarding voice commands acquired by the voice command receiving unit, equal to or greater than a second threshold value that is smaller than the first threshold value ([0084] The emergency speech recognition engine 560 may be similar to the speech recognition engine 460 but with enhanced and/or emphasized capability to recognize, interpret and/or identify an emergency speech 561 of the user 100. For example the emergency speech 561 may be: an emergency word such as “help!” (e.g., the emergency word 1110 of FIG. 11); a speech tone tending to indicate anxiety, pain, or suffering; and/or abnormally rapid, loud or incoherent speech. The emergency speech recognition engine 560 may also have a lower threshold for recognizing certain words commonly associated with danger or injury to the user 100, for example “medication”, “dizzy”, “injury”, “smoke”, expletives, and similar words. In one or more embodiments, the emergency speech recognition engine 560 may simultaneously receive the voice communication 110 of the user 100 at the same time as the speech recognition engine 460; however, upon meeting a threshold number of words associated with an emergency, danger or injury to the user 100 the automated emergency assistance engine 562 may communicate with the assistance coordinator 214 to route communications primarily to the emergency server 500 and/or initiate frequent instances of the status query 107 to the user 100.).
It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate important terms (of Fountaine) to cover additional circumstances that could affect voice recognition and adapt the thresholds accordingly for improved speech recognition in various circumstances and conditions, and for further voice controlled assistance for monitoring adverse events of a user and/or coordinating emergency actions such as caregiver communication (Fountaine 0002).

12. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Gowda et al (2023/0035752) in view of Doyle et al (2006/0265223) in further view of Liao (2014/0354881).

Regarding claim 9, Gowda does not specifically teach, but Liao teaches, The voice command receiving device according to claim 1, wherein the voice command receiving device is a recording control device used in a vehicle (0018), and further comprises a video data acquiring unit configured to acquire first video data taken by a first photographing unit which takes photograph of surrounding of the vehicle (0018), the voice command receiving unit is configured to receive an event recording instruction via a voice command (0018), and the implementation control unit is configured to, when the voice command receiving unit receives an event recording instruction via a voice command, store as event data the first video data capturing point of time of receiving the event recording instruction ([0018] When the voice recognition module 317 identifies a voice command from the user, it sends the command to the processor 313, the processor 313 controls the user interface module 315 to display the image of the corresponding driving assistance software and start the driving assistance software to execute actions according the voice command.
The driving assistance software is capable of recording an image in the front of the vehicle 100 via the camera 33, and record sound within the vehicle 100 via the recording components of the portable smart device 30. The processor 313 transfers the recordings of the images and the sound to the microprocessor 181. The microprocessor 181 transfers the above-mentioned recordings to the storage module 187 to be saved.).

It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the voice-initiated recording of Liao for an improved system that allows additional vehicular components to be operated using voice. Gowda already teaches voice commands for adjustable vehicle components, and one could look to Liao to provide an additional adjustable vehicle component for improved vehicle operation, presenting a reasonable expectation of success.

Conclusion

13. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: see PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN A ROBERTS, whose telephone number is (571) 270-7541. The examiner can normally be reached Monday-Friday, 9-5 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders, can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SHAUN ROBERTS/
Primary Examiner, Art Unit 2655
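For reference, the Doyle mechanism relied on throughout these rejections, reducing the rejection threshold when input signal quality (e.g. SNR) is low and raising it when quality is high (Doyle [0024]), can be sketched as a simple linear mapping. The function name, dB range, and threshold bounds below are illustrative assumptions, not values from Doyle:

```python
def adjusted_rejection_threshold(snr_db: float,
                                 lo: float = 0.50, hi: float = 0.85,
                                 snr_min: float = 0.0,
                                 snr_max: float = 30.0) -> float:
    """Map a measured SNR onto a rejection threshold: noisy input gets
    the reduced threshold `lo`, clean input the full `hi`, with linear
    interpolation in between. Illustrative sketch only."""
    frac = max(0.0, min(1.0, (snr_db - snr_min) / (snr_max - snr_min)))
    return lo + frac * (hi - lo)

print(adjusted_rejection_threshold(30.0))  # clean signal -> full threshold
print(adjusted_rejection_threshold(0.0))   # very noisy -> reduced threshold
```

The key point for the §103 combination is only the direction of the adjustment (lower quality, lower threshold), not any particular mapping.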

Prosecution Timeline

Jun 12, 2024
Application Filed
Feb 03, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586599: AUDIO SIGNAL PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM WITH MACHINE LEARNING AND FOR MICROPHONE MUTE STATE FEATURES IN A MULTI PERSON VOICE CALL (2y 5m to grant; granted Mar 24, 2026)
Patent 12586568: SYNTHETICALLY GENERATING INNER SPEECH TRAINING DATA (2y 5m to grant; granted Mar 24, 2026)
Patent 12573376: Dynamic Language and Command Recognition (2y 5m to grant; granted Mar 10, 2026)
Patent 12562157: GENERATING TOPIC-SPECIFIC LANGUAGE MODELS (2y 5m to grant; granted Feb 24, 2026)
Patent 12555562: VOICE SYNTHESIS FROM DIFFUSION GENERATED SPECTROGRAMS FOR ACCESSIBILITY (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 86% (+10.3%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 647 resolved cases by this examiner. Grant probability derived from career allow rate.
