Prosecution Insights
Last updated: April 19, 2026
Application No. 17/850,891

METHOD AND SYSTEM FOR MONITORING THE PERFORMANCE OF A VOICE RECOGNITION ASSISTANCE SYSTEM IN A DATA SENSITIVE ENVIRONMENT

Status: Non-Final Office Action (§103)
Filed: Jun 27, 2022
Examiner: KIM, ETHAN DANIEL
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: Harman International Industries, Incorporated
OA Round: 3 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (83 granted / 107 resolved), above average (+15.6% vs TC avg)
Interview Lift: +29.5% across resolved cases with interview
Avg Prosecution: 2y 11m typical timeline; 13 applications currently pending
Career History: 120 total applications across all art units
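The headline figures in this card are simple ratios over the examiner's resolved cases. A minimal sketch (Python, using only the numbers shown above; reading the "+15.6% vs TC avg" figure as a percentage-point delta is an assumption about how the dashboard computes it):

```python
# Figures from the examiner card above
granted, resolved = 83, 107

# Career allow rate, shown on the card rounded to a whole percent
career_allow_rate = 100 * granted / resolved
print(round(career_allow_rate))  # 78

# Treating "+15.6% vs TC avg" as a percentage-point difference (assumption)
# implies a Tech Center average allow rate of roughly 62%
implied_tc_avg = career_allow_rate - 15.6
print(round(implied_tc_avg, 1))  # 62.0
```

The same arithmetic explains why 83/107 displays as 78% rather than the exact 77.57%.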

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 38.1% (-1.9% vs TC avg)
§112: 1.7% (-38.3% vs TC avg)
Tech Center average values are estimates, based on career data from 107 resolved cases.
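Each per-statute delta should be consistent with a single Tech Center baseline. A quick sanity check (Python; treating each delta as a simple percentage-point difference is an assumption about the dashboard's arithmetic) confirms that all four rows imply the same TC average of about 40%:

```python
# statute: (examiner's rate %, delta vs Tech Center average %)
stats = {
    "101": (7.4, -32.6),
    "103": (48.0, +8.0),
    "102": (38.1, -1.9),
    "112": (1.7, -38.3),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = round(rate - delta, 1)  # since rate = TC avg + delta
    print(f"Section {statute}: implied TC average = {implied_tc_avg}%")  # 40.0 for every row
```

The internal consistency suggests the deltas were all computed against one baseline rather than per-statute baselines.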

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

2. The information disclosure statement (IDS) submitted on June 27, 2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Continued Examination Under 37 CFR 1.114

3. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on August 12, 2025 has been entered.

Response to Amendments and Arguments

4. The amendment filed on August 12, 2025 has been entered. Claims 1-20 remain pending in the application. Claims 1, 11, and 19 are amended. On pages 8 and 9 of the remarks accompanying the amendment, the applicant argues that Chang does not disclose the amended limitations of the independent claims. The examiner agrees with this assertion. Applicant's arguments with respect to the 35 U.S.C. 103 rejections of claims 1-20 have been considered but are moot because they are directed towards amended claim language, which is addressed in the new grounds of rejection below.

Claim Rejections - 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-9 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chang (U.S. Publication No. 20190043499) in view of Shepherd (U.S. Publication No. 20150154291).

Regarding claim 1, Chang discloses a method for monitoring performance of a voice recognition (VR) assistance system in a data sensitive environment ([0012] - the voice control device 100 is provided in a television or in a set-top box (STB) to receive voice data and then perform voice recognition), wherein the VR assistance system comprises one or more client devices and a server, the server comprising a monitoring component, the method comprising: determining, by at least one client device of the one or more client devices, client input data ([0012] - voice control device 100); processing, by the VR assistance system, the client input data ([0012] - the voice control device 100 is provided in a television or in a set-top box (STB) to receive voice data and then perform voice recognition, so as to accordingly control the operation of the television); and at least one of outputting or saving, by the monitoring component, the determined one or more anonymized performance indicator values ([0015] - The reading trigger mechanism may be the amount of valid data stored in the first memory 120 having reached a threshold, or after a predetermined time interval, or the first memory 120 has received a complete set of packet data. It should be noted that, "valid data" refers to unprocessed and non-deletable voice data but not non-deleted data that is still stored in the memory 120. In FIG. 2, the change in the amount of valid data stored in the first memory 120 can be observed. Voice data is constantly written in the first memory 120 (i.e., the amount of valid data stored increases) and the voice data is constantly being read by the voice processing circuit 130 (i.e., the amount of valid data stored decreases), such that the amount of valid data stored is maintained at a low level).

However, Chang does not disclose selecting, by the monitoring component, one or more anonymized performance indicators from a plurality of categories of anonymized performance indicators of the VR assistance system, or determining, by the monitoring component, one or more anonymized performance indicator values for the one or more selected anonymized performance indicators during the processing of the client input data without assembling or using personal client data.

Shepherd does teach selecting, by the monitoring component, one or more anonymized performance indicators from a plurality of categories of anonymized performance indicators of the VR assistance system ([0012] - the individual engagement metrics may be received anonymously… For example, individual engagement metrics may be selected from the group consisting of: …voice interaction by the participant during the virtual collaboration session. [0095] - individual participant's engagement metrics include speaking time 501, email 502, web browsing 503, and engagement score 504), and determining, by the monitoring component, one or more anonymized performance indicator values for the one or more selected anonymized performance indicators during the processing of the client input data without assembling or using personal client data (id.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chang to incorporate the teachings of Shepherd in order to implement selecting, by the monitoring component, one or more anonymized performance indicators from a plurality of categories of anonymized performance indicators of the VR assistance system, and determining, by the monitoring component, one or more anonymized performance indicator values for the one or more selected anonymized performance indicators during the processing of the client input data without assembling or using personal client data. Doing so allows the data to be combined with additional data from team engagement metrics to provide coaching for meeting or session organizers or initiators to provide suggestions that aim to improve effectiveness over time (Shepherd [0101]).

Regarding claim 2, Chang in view of Shepherd discloses all limitations of claim 1, above. Chang discloses the method, wherein the one or more anonymous performance indicators are consistent with predetermined general data protection regulations ([0030] - More specifically, the security control circuit 470 may set a part of the first memory 420 as a security protection area, and the receiving circuit 410 stores the received voice data in the security protection area, which is permitted to be accessed by the voice processing circuit 430 and the memory controller 440 for reading and writing operations. Similarly, the security control circuit 470 may also set a part of the second memory 450 as a security protection area, and the memory controller 440 stores the voice data from the first memory 420 to the security protection area, which is permitted to be accessed only by the main processing circuit 460 for reading and writing operations).

Regarding claim 3, Chang in view of Shepherd discloses all limitations of claim 1, above. Chang discloses the method, wherein determining, by the monitoring component, one or more anonymized performance indicator values comprises increasing one or more performance indicator counters, in particular, a plurality of respective counters for a plurality of time intervals ([0017] - Thus, in FIG. 2, it is seen that, starting from the time point t2, the amount of valid data stored in the memory 120 is continually increased to a higher level).

Regarding claim 4, Chang in view of Shepherd discloses all limitations of claim 3, above. Chang discloses the method, wherein the one or more performance indicator counters are indicative of a processing efficiency of the VR assistance system, processing time of the VR assistance system, reply performance of the VR assistance system, processing errors of the VR assistance system, a capacity of the one or more client devices, a power usage of the one or more client devices, a capacity of the server, or a power usage of the server ([0019] - when the voice device 100 is in an idle state, only the receiving circuit 110, the first memory 120 and the voice processing circuit 130 need to be in an enabled state, and the voice processing circuit 130 is designed to be able to recognize only the voice data of the predetermined command "Hello, MStar." Therefore, these elements needing to be in an enabled state over an extended period of time require minimal power consumption. In contrast, the elements requiring more power consumption, e.g., the main processing circuit 160, can enter a hibernation state when idle, thus significantly reducing power consumption).

Regarding claim 5, Chang in view of Shepherd discloses all limitations of claim 3, above. Chang discloses the method, wherein the one or more performance indicator counters are indicative of a usage behavior of the one or more client devices, a request intensity of the one or more client devices, one or more client input data types, software performance of the one or more client devices, or hardware performance of the one or more client devices ([0020] - the voice processing circuit 130 may be switched to a hibernation state (e.g., power is disconnected or an extremely low power is supplied) to further save power, and is again woken up after the main processing circuit 160 again enters hibernation. In another embodiment, because the voice processing circuit 130 is a low power consuming element, it can be selectively designed to remain in an enabled state).

Regarding claim 6, Chang in view of Shepherd discloses all limitations of claim 4, above. Chang discloses the method, wherein the one or more performance indicator counters are indicative of an occurrence of a language of the client input data ([0015], as quoted in the rejection of claim 1 above).

Regarding claim 7, Chang in view of Shepherd discloses all limitations of claim 1, above. Chang discloses the method, further comprising: comparing, by the monitoring component, the one or more anonymized performance indicator values to one or more previously determined anonymized performance indicator values or one or more previously determined performance indicator threshold values ([0015], as quoted in the rejection of claim 1 above).

Regarding claim 8, Chang in view of Shepherd discloses all limitations of claim 1, above.
Chang discloses the method, further comprising: generating, by the VR assistance system, client output data based on the processed client input data ([0030] - the security control circuit 470 may set a part of the first memory 420 as a security protection area, and the receiving circuit 410 stores the received voice data in the security protection area, which is permitted to be accessed by the voice processing circuit 430 and the memory controller 440 for reading and writing operations. Similarly, the security control circuit 470 may also set a part of the second memory 450 as a security protection area, and the memory controller 440 stores the voice data from the first memory 420 to the security protection area, which is permitted to be accessed only by the main processing circuit 460 for reading and writing operations); outputting, by the at least one client device, the client output data (id.); and deleting, by the VR assistance system, the client input data and the client output data (id.).

Regarding claim 9, Chang in view of Shepherd discloses all limitations of claim 1, above. Chang discloses the method, wherein the monitoring component is comprised in a separate docker container on the server ([0030] - the security control circuit 470 sets access permission of the first memory 420 and/or the second memory 450, so as to prevent theft of the voice data stored in the first memory 420 and/or the second memory 450).
Regarding claim 11, Chang discloses a monitoring component for use in a voice recognition (VR) assistance system ([0012] - the voice control device 100 is provided in a television or in a set-top box (STB) to receive voice data and then perform voice recognition), wherein the monitoring component is configured to: at least one of output or save the determined one or more anonymized performance indicator values ([0015], as quoted in the rejection of claim 1 above).

However, Chang does not disclose selecting, by the monitoring component, one or more anonymized performance indicators from a plurality of categories of anonymized performance indicators of the VR assistance system, or determining, by the monitoring component, one or more anonymized performance indicator values for the one or more selected anonymized performance indicators during the processing of the client input data without assembling or using personal client data.

Shepherd does teach selecting, by the monitoring component, one or more anonymized performance indicators from a plurality of categories of anonymized performance indicators of the VR assistance system ([0012] - the individual engagement metrics may be received anonymously… For example, individual engagement metrics may be selected from the group consisting of: …voice interaction by the participant during the virtual collaboration session. [0095] - individual participant's engagement metrics include speaking time 501, email 502, web browsing 503, and engagement score 504), and determining, by the monitoring component, one or more anonymized performance indicator values for the one or more selected anonymized performance indicators during the processing of the client input data without assembling or using personal client data (id.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chang to incorporate the teachings of Shepherd in order to implement selecting, by the monitoring component, one or more anonymized performance indicators from a plurality of categories of anonymized performance indicators of the VR assistance system, and determining, by the monitoring component, one or more anonymized performance indicator values for the one or more selected anonymized performance indicators during the processing of the client input data without assembling or using personal client data.
Doing so allows the data to be combined with additional data from team engagement metrics to provide coaching for meeting or session organizers or initiators to provide suggestions that aim to improve effectiveness over time (Shepherd [0101]).

Dependent claims 12-18 are analogous in scope to claims 2-7 and 9, and are rejected according to the same reasoning.

Regarding claim 19, Chang discloses a voice recognition (VR) assistance system ([0012] - the voice control device 100 is provided in a television or in a set-top box (STB) to receive voice data and then perform voice recognition), the VR assistance system comprising: one or more client devices ([0012] - voice control device 100); and a server, the server comprising a monitoring component; wherein the monitoring component is configured to perform a method comprising: processing client input data of at least one of the one or more client devices ([0012] - the voice control device 100 is provided in a television or in a set-top box (STB) to receive voice data and then perform voice recognition, so as to accordingly control the operation of the television); and at least one of outputting or saving the determined one or more anonymized performance indicator values ([0015], as quoted in the rejection of claim 1 above).

However, Chang does not disclose selecting, by the monitoring component, one or more anonymized performance indicators from a plurality of categories of anonymized performance indicators of the VR assistance system, or determining, by the monitoring component, one or more anonymized performance indicator values for the one or more selected anonymized performance indicators during the processing of the client input data without assembling or using personal client data.

Shepherd does teach selecting, by the monitoring component, one or more anonymized performance indicators from a plurality of categories of anonymized performance indicators of the VR assistance system ([0012] - the individual engagement metrics may be received anonymously… For example, individual engagement metrics may be selected from the group consisting of: …voice interaction by the participant during the virtual collaboration session. [0095] - individual participant's engagement metrics include speaking time 501, email 502, web browsing 503, and engagement score 504), and determining, by the monitoring component, one or more anonymized performance indicator values for the one or more selected anonymized performance indicators during the processing of the client input data without assembling or using personal client data (id.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chang to incorporate the teachings of Shepherd in order to implement selecting, by the monitoring component, one or more anonymized performance indicators from a plurality of categories of anonymized performance indicators of the VR assistance system, and determining, by the monitoring component, one or more anonymized performance indicator values for the one or more selected anonymized performance indicators during the processing of the client input data without assembling or using personal client data. Doing so allows the data to be combined with additional data from team engagement metrics to provide coaching for meeting or session organizers or initiators to provide suggestions that aim to improve effectiveness over time (Shepherd [0101]).

Dependent claim 20 is analogous in scope to claim 2 and is rejected according to the same reasoning.

7. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Chang (U.S. Publication No. 20190043499) in view of Shepherd (U.S. Publication No. 20150154291), and further in view of Schuster (U.S. Patent No. 9542947).

Regarding claim 10, Chang in view of Shepherd teaches all limitations of claim 1, above. However, Chang does not disclose the method, wherein the VR assistance system is a multi-language VR assistance system. Schuster does teach the method, wherein the VR assistance system is a multi-language VR assistance system ([Col 13, Lines 45-47] - The selector module 416, in one embodiment, is configured to receive data, including a language 560 spoken by the user). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chang in view of Shepherd to incorporate the teachings of Schuster in order to implement the method, wherein the VR assistance system is a multi-language VR assistance system.
Doing so allows for more efficient execution of voice recognition processing, reduced computational load, and minimization of processing delays (Schuster [Col 15, Lines 32-34]).

Conclusion

8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Biswal (U.S. Publication No. 20170287490) teaches speaker recognition using adaptive thresholding. Nagatomo (U.S. Publication No. 20120215528) teaches a speech recognition system, speech recognition request device, speech recognition method, speech recognition program, and recording medium. Sheaffer (U.S. Publication No. 20200105291) teaches real-time feedback during audio recording, and related devices and systems.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ETHAN DANIEL KIM whose telephone number is (571) 272-1405. The examiner can normally be reached Monday - Friday, 9:00 - 5:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ETHAN DANIEL KIM/
Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658

Prosecution Timeline

Jun 27, 2022: Application Filed
Nov 14, 2024: Non-Final Rejection — §103
Feb 19, 2025: Response Filed
May 27, 2025: Final Rejection — §103
Jul 23, 2025: Response after Final Action
Aug 12, 2025: Request for Continued Examination
Aug 14, 2025: Response after Non-Final Action
Oct 21, 2025: Non-Final Rejection — §103 (current)
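Reading the timeline against the projections: a short sketch (Python standard library only) of pendency so far, measured from filing to the current Office action, shows the case is already past the examiner's 2y 11m median, consistent with the High PTA Risk flag below.

```python
from datetime import date

# Dates taken from the prosecution timeline above
filed = date(2022, 6, 27)         # Application Filed
current_oa = date(2025, 10, 21)   # Non-Final Rejection (current)

pending_days = (current_oa - filed).days
print(pending_days)                     # 1212 days
print(round(pending_days / 365.25, 1))  # ~3.3 years, beyond the 2y 11m median
```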

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597414: GENERATION OF TRAINING EXAMPLES FOR TRAINING AUTOMATIC SPEECH RECOGNIZERS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596874: OPERATION ERROR DETECTION
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12573384: DEVICE CONTROL SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12566922: KNOWLEDGE ACCELERATOR PLATFORM WITH SEMANTIC LABELING ACROSS DIFFERENT ASSETS
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12562183: DEEP LEARNING FOR JOINT ACOUSTIC ECHO AND ACOUSTIC HOWLING SUPPRESSION IN HYBRID MEETINGS
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+29.5%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 107 resolved cases by this examiner. Grant probability derived from career allow rate.
