Prosecution Insights
Last updated: April 19, 2026
Application No. 18/425,025

METHOD AND SYSTEM FOR SOUND MONITORING OVER A NETWORK

Final Rejection — §103, §112, §DP
Filed: Jan 29, 2024
Examiner: ZHANG, LESHUI
Art Unit: 2695
Tech Center: 2600 — Communications
Assignee: St Fam Tech LLC
OA Round: 2 (Final)
Grant Probability: 78% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% — above average (719 granted / 928 resolved; +15.5% vs TC avg)
Interview Lift: +36.0% — strong (allowance rate in resolved cases with an interview vs. without)
Avg Prosecution: 2y 10m typical (47 applications currently pending)
Total Applications: 975 (across all art units)
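For readers who want to check the arithmetic, here is a minimal sketch of how the headline figures above fit together; treating each delta as a simple percentage-point difference is an assumption, not something the page states.

```python
# Reconstructing the examiner stats shown above (assumed arithmetic).
granted, resolved = 719, 928

allow_rate = granted / resolved             # 0.7748 -> displayed as 78%
implied_tc_avg = allow_rate - 0.155         # "+15.5% vs TC avg" -> TC avg ~62%

with_interview = 0.99                       # "99% With Interview"
implied_without = with_interview - 0.36     # "+36.0% Interview Lift" -> ~63%

print(f"allow rate {allow_rate:.1%}; implied TC avg {implied_tc_avg:.1%}; "
      f"implied rate without interview {implied_without:.1%}")
```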

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 28.7% (-11.3% vs TC avg)
Tech Center average is an estimate • Based on career data from 928 resolved cases
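Treating each "vs TC avg" delta as a percentage-point difference (the same assumption as above), every statute row backs out to the same Tech Center average, consistent with a single flat average estimate:

```python
# Back out the TC average implied by each statute's delta (assumed
# percentage-point reading); each row yields the same ~40.0%.
rates = {
    "§101": (0.055, -0.345),
    "§103": (0.425, +0.025),
    "§102": (0.136, -0.264),
    "§112": (0.287, -0.113),
}
for statute, (rate, delta) in rates.items():
    print(f"{statute}: examiner {rate:.1%}, implied TC avg {rate - delta:.1%}")
```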

Office Action

§103 §112 §DP
DETAILED ACTION

The present application is being examined under the pre-AIA first-to-invent provisions. This Office Action responds to the claim amendment filed on August 27, 2025, in which claims 1, 4-5, 11, and 14-15 were amended. Claims 1-20 are currently pending.

With respect to the objection to the drawings for formality issues, as set forth in the previous Office Action, applicant's argument (paragraphs 1-4 of page 1 and paragraph 1 of page 2 of the Remarks Continuation "V. Drawing" filed on August 27, 2025) has been fully considered and is persuasive: the claimed "metadata" is interpreted as information including "GPS time" or "time information," "geographic position" or "geographic information," "type of sound," etc. The objection to the drawings for formality issues is therefore withdrawn.

With respect to the non-statutory obviousness-type double patenting rejection of claims 1 and 11 over conflicting claims 1, 8-9, 13, and 17 of U.S. Patent No. 10,419,863 B2, and of claims 1-20 over conflicting claims 1-6 and 10-11 of U.S. Patent No. 11,039,259 B2, each further in view of Shalon, Schuler, and Radomski, as set forth in the previous Office Action: the Terminal Disclaimer submitted on August 27, 2025 is acknowledged and approved, and applicant's argument (paragraph 2 of page 3 of the Remarks Continuation "V. Drawing" filed on August 27, 2025) has been fully considered and found persuasive. Both double patenting rejections are therefore withdrawn.

With respect to the rejection of claims 1-20 under 35 U.S.C. § 112(b), as set forth in the previous Office Action, the claim amendment and the argument (paragraphs 1-2 of page 4 of the Remarks Continuation) have been fully considered, and the argument is persuasive. That rejection is therefore withdrawn.

The Office appreciates the explanation of the amendment and the analyses of the prior art; however, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) and MPEP 2145.

Information Disclosure Statement

The text of those sections of Title 35 not included in this action can be found in the prior Office Action mailed on June 6, 2025. Applicant argued that "[t]he relevance of the art can be determined by reviewing the arguments in cited IPR2022-01106 with respect to the parent of the present application, and arguments on similar scope claims in IPR2022-00242, IPR2022-00243, IPR2022-00324, and IPR2022-00234," as asserted in paragraph 5 of page 1.
It is respectfully noted that the Office could not locate the indicated "IPR2022-xxxxx" proceedings above in the record, and could not find page and/or line numbers with a concise explanation of how the information is understood to be relevant, why the information is being submitted, and whether one or more items are highly relevant, with pinpoint citations to those pages and lines, as required by MPEP 609.04(a)(III). This applies to the IDS items including the "Prof. Chris Kyriakakis" contact information, foreign patent JPH 10162283, and NPL appendices 14A-14C for "US Patent No. 11,244,666 to Samsung's Invalidity Contentions ...," etc., as listed in the submitted IDS and indicated in the previous Office Action. Because applicant failed to provide the relevant information addressed in the previous Office Action, the comment on the IDSs set forth in the previous Office Action is maintained.

Specification

The application specification fails to disclose the claimed features "transmitting the metadata, the sensor signal and the acoustic signal to the server via ..." and "analyzing the acoustic signal to determine if the acoustic signal contains a voice signal and if so then extracting features from the voice signal" recited in claims 1 and 11. The Office appreciates applicant's efforts in describing the disclosed "tags and features (or acoustic signal)," which should be interpreted as "tags + features" or "tags + (acoustic) signal" (paragraphs 4-5 of page 2 of the Remarks filed on August 27, 2025), or "audio content"; in describing that the metadata packet with "other information" could be sent to "memory" and "database 614"; and in further indicating that "the sound pressure level (feature), metadata, and other information could be sent immediately by mobile device 600 ... to database 614 (para 79)" (paragraphs 2-6 of page 3 of the Remarks). The "other information," as disclosed, is nothing more than "type of sound detected," etc. (para 89). Applicant further described that the "sensor signal" (such as "biological, acceleration/velocity, odor, chemical detection, visual, etc.") is specifically to be analyzed for "trigger events" (para 56; paragraphs 2-4 of page 4 of the Remarks). However, there is no disclosure of "transmitting" a "sensor signal" serving as an "event trigger" to "the server." Further, there is no disclosure of performing the claimed "analyzing" step and the conditional "if so, ..." step; the specification broadly and merely discloses that "[a] feature extraction module 412 (also referred to herein as feature extractor 412) can extract one or more salient features from the audio signal. For instance, the feature extractor 412 for voice signals, may extract linear prediction coefficients, mel cepstral coefficients, par cor coefficients, Fourier series coefficients (para 63)," with no disclosure of "determine if the acoustic signal contains a voice signal, if so ....". With respect to the amendment "mobile device that can be worn by a user," which is believed to be supported by the application's disclosure that "people generally carry their mobile devices around with them (para 3)," the specification objection associated with this issue is withdrawn in view of the claim amendment above.
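To make the disputed step concrete, here is a minimal sketch (not the application's actual code, and not asserted to be what the specification supports) of "analyzing the acoustic signal to determine if the acoustic signal contains a voice signal and if so then extracting features from the voice signal." The energy/zero-crossing voice test and its thresholds are stand-in assumptions; the feature list follows the specification's para 63 (linear prediction and mel-cepstral coefficients).

```python
import numpy as np
import librosa  # assumed available for LPC/MFCC feature extraction

def contains_voice(signal: np.ndarray) -> bool:
    # Crude stand-in voice-activity test: voiced speech carries real
    # energy and a moderate zero-crossing rate; thresholds are arbitrary.
    energy = float(np.mean(signal ** 2))
    zcr = float(np.mean(librosa.zero_crossings(signal)))
    return energy > 1e-4 and 0.02 < zcr < 0.25

def extract_voice_features(signal: np.ndarray, sr: int) -> dict:
    return {
        "lpc": librosa.lpc(signal, order=12),                      # linear prediction coefficients
        "mfcc": librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13),  # mel cepstral coefficients
    }

def analyze(signal: np.ndarray, sr: int):
    # "... and if so then extracting features from the voice signal"
    return extract_voice_features(signal, sr) if contains_voice(signal) else None
```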
Note: a wearable device is different from a portable/mobile device; the former is hands-free, while the latter is not hands-free unless specifically assembled to be (see, e.g., https://www.google.com/search?q=communicaiton+device%2C+difference+between+portable+vs.+wearable). Appropriate correction is required.

Domestic Priority Benefit

The Applicant, in the Application Data Sheet filed on September 16, 2019, indicated a claim of benefit under 35 U.S.C. 119(e), 120, 121, or 365(c) from previously filed applications including provisional application 61/096,128 filed on September 11, 2008; application 12/555,570 filed on September 8, 2009, now U.S. Patent No. 8,488,799 B2; and application 13/917,079 filed on June 13, 2013, now U.S. Patent No. 10,419,863 B2. However, a close examination found that the claimed features "transmitting (the metadata), sensor signal and (the acoustic signal) ... to the server" as recited in claims 1 and 11 (where the "sensor signal" has been interpreted as a signal to "trigger an event," as discussed in the specification objection above) and "analyzing the acoustic signal to determine if the acoustic signal contains a voice signal and if so then extracting features from the voice signal" as recited in claims 1 and 11, as discussed in the specification objection set forth above, are disclosed neither in the application specification nor in the parent application specifications. Therefore, these features as recited in claims 1 and 11 have no domestic benefit from the provisional application or the parent applications above. An appropriate reply to the above is required.

Claim Objections

Claims 2-10 are objected to because of the following informalities: claim 1 recites "the server," which should be --the remote server--. Claims 2-10 are objected to due to their dependency on claim 1. Claims 2-3 are further objected to for at least similar reasons as claim 1, because they recite the same deficient feature: "the server," which should refer back to "a remote server" as recited in parent claim 1. Claim 11 is objected to for at least similar reasons as claim 1, because claim 11 also recites "the server," which should refer back to "a remote server" as recited earlier in claim 11. Claims 12-20 are objected to due to their dependency on claim 11. Claims 12-13 are further objected to for at least similar reasons as claim 11, because they recite the same deficient feature.
For example, claims 12-13 recite "the server," which should refer to "a remote server" as recited in parent claim 11. Claim 2 recites "The device of claim 1," which should be --The mobile device of claim 1--. Claim 8 is objected to due to its dependency on claim 2. Claims 3-10 are objected to for at least similar reasons as claim 2, because they recite the same deficient feature. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112 (pre-AIA), first paragraph:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claims 1-20 contain subject matter that was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor(s) or joint inventor (or, for pre-AIA, the inventor(s)), at the time the application was filed, had possession of the claimed invention. Claims 1 and 11 recite "transmitting the metadata, the sensor signal and the acoustic signal to the server via ..." and "analyzing the acoustic signal to determine if the acoustic signal contains a voice signal and if so then extracting features from the voice signal." However, the original disclosure, including the original claims and drawings, nowhere discloses a sufficiently definite structure or a written description in sufficient detail for performing the claimed functions above; see the discussion in the specification objection above. Claims 2-10 are rejected due to their dependency on claim 1. Claim 11 is rejected for at least similar reasons as claim 1, because it recites similar deficient features. Claims 12-20 are rejected due to their dependency on claim 11.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office Action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 1-4, 6-14, and 16-20 are rejected under 35 U.S.C. 103(a) as being unpatentable over Shalon et al. (US 20060064037 A1, hereinafter Shalon) in view of Schuler et al. (US 20080159547 A1, hereinafter Schuler).

Claim 1: Shalon teaches a wearable device (title and abstract, ln 1-9; a head-mounted system including processing unit 14, sensor unit 12, feedback unit 24, etc., in figs. 1a-d, or a wristwatch as the wearable device, para 22), comprising: a microphone (one of one or more microphones positioned in or around the ear area, or on the skull, neck, throat, chest, back, or abdomen regions, para 117); a sensor (another of the one or more microphones above, or a sensor selected from a group consisting of a heart rate sensor, an accelerometer, a skin conductance sensor, a muscle tone sensor, a blood sugar level sensor, a bite sensor, and a stomach contraction sensor, para 18); a processor (processor included in the processing unit 14, with software applications for processing signals received from the sensor unit, para 235) that executes instructions to perform operations, the operations comprising: receiving an acoustic signal from the microphone configured to measure an ambient environment (ambient sound sensor for picking up ambient noise or the noise of the speaker, para 121); analyzing the acoustic signal to determine if the acoustic signal contains a voice signal (voice recognition system, para 343; or the microphone signals are analyzed to determine the nature of the bolus swallowed, etc., para 117; or an analysis frame of the acoustic signal is formed by filtering noise, normalizing the energy level, and segmenting the sampled sound for spectral signature analysis, para 240; or the user's speech is recognized during each feeding event, para 219) and if so then extracting features from the voice signal (the user's voice or something said to the user is extracted based on the voice recognition, para 343; or a swallowing event is extracted or detected from periodic cessation of breathing, chewing, or other eating sounds and patterns, para 119; or features are extracted from the spectral signature analysis to identify waveforms with eating microstructure events or signatures, or the extracted components are classified against patterns and segmented into chews, sips, and speech as the claimed features, para 240; or the words are recognized and used for calculating the number of calories for the word "apple," etc., para 219); receiving a sensor signal from the sensor (e.g., the sensor signal is processed to detect an eating event including bite, chew, and swallow, as indicated in fig. 3, and individual sections of the eating event are classified and identified through HMMs and patterns, para 256, and through monitoring patterns over time, para 147; or the sensor is selected from a group consisting of a heart rate sensor, an accelerometer, a skin conductance sensor, a muscle tone sensor, a blood sugar level sensor, a bite sensor, and a stomach contraction sensor, para 18, or another of the one or more microphones above); analyzing a sensor signal to detect a trigger event (an eating event is detected to trigger the sound sensor to transmit information; when not transmitting sounds, the microphone and electronics are in standby mode to conserve power, para 122); opening a communication channel with a remote server if the trigger event is detected (the sound sensor transmits information only when an eating event is detected, para 122, and the communications between sensor unit 12 and processing unit 14 are through WiFi or Bluetooth, para 232; thus, a WiFi or Bluetooth channel is inherently established for transmitting the sensor data from the sensor unit 12 to the processing unit 14, or the data is communicated to a remote server 20 through a network 22 in fig. 1); generating metadata including a time stamp (spectral analysis and the raw signal are utilized to extract features and categorize them with a time stamp and a fitness score, para 261); transmitting the sensor signal and the acoustic signal to a server (transmitting data in batch mode or in real time to a server or other data sharing mechanisms, para 191; the collected data includes raw recordings of sensor data, i.e., the sensor signal and the acoustic signal, processed sensor data, activity-related signatures, or high-level behavioral data, para 153, including sensed ingestion activity sent to the server, para 192).

However, Shalon does not explicitly teach transmitting metadata containing the disclosed time stamp, and does not explicitly teach receiving, from the server, an indication that the metadata, the sensor signal, and the acoustic signal have been received. Schuler teaches an analogous field of endeavor by disclosing a wearable device (title and abstract, ln 1-16; a system including a wireless headset 102 in fig. 1) in which metadata is disclosed (information included in an SPL record or SPL exposure record, including a communication device ID and an SPL measurement) that includes a time stamp (a corresponding date and time of measurement, para 43, as an SPL exposure record 230, para 50); the metadata is transmitted to a server via a communication channel (the SPL exposure record 230, containing the SPL measurement information and the corresponding date and time of the measurement, is sent to an SPL tracking server 112 for remote monitoring or historical collection, para 56); and an indication that the metadata, the sensor signal, and the acoustic signal (the SPL exposure record 230 above) have been received is received from the server (the SPL exposure director 422 at the server of fig. 4 advises the user to take corrective action, e.g., full-time use of hearing protection or seeking hearing protectors better suited to the environment, para 62, which inherently indicates receipt of the SPL exposure record 230), for the benefit of improving the operability of sound measurement (by providing cost-saving, common hardware, para 4, such as a wireless telephone or PDA and a headset, para 23, in a dynamic environment, para 4). Therefore, it would have been obvious to one having ordinary skill in the art at the time the invention was made to have applied transmitting the metadata containing the time stamp and receiving, from the server, the indication that the metadata, the sensor signal, and the acoustic signal have been received, as taught by Schuler, to the transmission of the sensor signal and the acoustic signal to the server in the wearable device taught by Shalon, for the benefits discussed above.

Claim 11 has been analyzed and rejected according to claim 1 above.

Claim 2: the combination of Shalon and Schuler further teaches, according to claim 1 above, wherein the operations further comprise transmitting the metadata, the sensor signal, and the acoustic signal as a package to the server over the communication channel (Shalon: transmitting collected data to the server in batch mode, as compared to real-time mode, para 191; Schuler: transmitting the SPL exposure data to the server, and the discussion in claim 1 above).
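As a reading aid, here is a minimal sketch of the claim 1 flow as the Office maps it onto Shalon and Schuler: detect a trigger in the sensor signal, open a channel to a remote server, send time-stamped metadata together with the sensor and acoustic signals, and treat the server's response as the receipt indication. The endpoint URL, field names, and threshold are hypothetical.

```python
import json
import time
import urllib.request

def on_sample(sensor_signal, acoustic_signal, threshold=0.5):
    # "analyzing a sensor signal to detect a trigger event"
    if max(sensor_signal) <= threshold:
        return None  # no trigger: stay in standby, transmit nothing
    # "generating metadata including a time stamp"
    metadata = {"timestamp": time.time(), "type_of_sound": "unclassified"}
    payload = json.dumps({
        "metadata": metadata,
        "sensor": list(sensor_signal),
        "acoustic": list(acoustic_signal),
    }).encode()
    # "opening a communication channel with a remote server" (hypothetical endpoint)
    req = urllib.request.Request("https://example.invalid/ingest", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        # "receiving, from the server, an indication ... has been received"
        return resp.status == 200
```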
Claim 3: the combination of Shalon and Schuler further teaches, according to claim 1 above, wherein the operations further comprise transmitting the metadata, the sensor signal, and the acoustic signal separately to the server via separate communications over the communication channel (Shalon: transmitting collected data to the server in real-time mode, para 191; Schuler: the discussion in claim 1 above).

Claim 4: the combination of Shalon and Schuler further teaches, according to claim 1 above, analyzing the acoustic signal against stored models (Shalon: previously stored scenarios, para 266) to detect a sonic signature (Shalon: detecting the scenario the user is in, e.g., breakfast, lunch, snacking, or sipping a mid-afternoon hot coffee, para 266; or acoustic energy signatures of the user's breathing or heartbeat are analyzed to extract the user's breathing patterns as the sonic signature, etc., para 327; or a chewing-sound authenticator for accessing the system; Schuler: the sound exposure level measured up to a threshold, with the sound exposure level as the sonic signature, para 50).

Claim 6: the combination of Shalon and Schuler further teaches, according to claim 1 above, buffering the acoustic signal upon detection of the trigger event (Shalon: an eating event is detected to trigger the sound sensor to transmit information, as discussed in claim 1 above, and the collected sound signal is A/D-converted into digital format for processing by DSP 30 in fig. 2; thus, buffering, such as a DSP register buffering the digitized acoustic signal, is inherent for the DSP to process it; Schuler: the SPL exposure record 230 is stored for future retrieval in the data memory 220, para 50, and the SPL exposure record contains a user sound exposure level measurement, paras 56, 60).

Claim 7: the combination of Shalon and Schuler further teaches, according to claim 1 above, wherein the operations further comprise receiving location data (Shalon: receiving the precise location of the user from GPS, para 335; Schuler: GPS location information of the communication device, para 63).

Claim 8: the combination of Shalon and Schuler further teaches, according to claim 2 above, wherein location data is included in the package (Shalon: receiving the precise location of the user from GPS, para 335; Schuler: the SPL exposure record 230 from each communication device also includes a GPS location of the communication device, para 63, and is transmitted to the SPL tracking server 112 to be stored, paras 50, 56; the SPL exposure record 230 also contains the SPL measurement information, corresponding time and date, user identifier, and communication device identifier, para 56, i.e., inherently a package transmitted with the location information).

Claim 9: the combination of Shalon and Schuler further teaches, according to claim 1 above, wherein the sensor measures an acceleration (Shalon: chewing can be detected using one or more accelerometers or vibration detectors placed on the skull, para 141, or limb location in space is monitored by fitted accelerometers, para 134).
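For claim 4's "analyzing the acoustic signal against stored models to detect a sonic signature," one plausible reading is spectral template matching; the sketch below uses cosine similarity over magnitude spectra and assumes the stored templates (hypothetical) have the same length as the computed spectrum.

```python
import numpy as np

def detect_signature(signal: np.ndarray, templates: dict[str, np.ndarray],
                     min_score: float = 0.8):
    # Compare the signal's normalized magnitude spectrum against each
    # stored model; return the best-matching signature name, or None.
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum /= np.linalg.norm(spectrum) + 1e-12
    best_name, best_score = None, min_score
    for name, template in templates.items():
        t = template / (np.linalg.norm(template) + 1e-12)
        score = float(np.dot(spectrum, t))   # cosine similarity
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```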
Claim 10: the combination of Shalon and Schuler further teaches, according to claim 1 above, wherein the operations further comprise presenting a threat level associated with the trigger event (Shalon: an eating event is detected to trigger the sound sensor to transmit information, etc., para 122, and the discussion in claim 1 above; an eating event or activity is detected by counting the bites, chews, and/or swallows in each eating sequence, and feedback is triggered if preset eating rates or eating durations, as the threat level, are exceeded, para 188; or a voice-activated function (VOX) 114 turns on the circuitry above a predetermined sound threshold, para 271, and when certain thresholds are met, the system can be triggered to process the vital signals and emit alarms, para 337; Schuler: a user alert is generated if the measured SPL exceeds the predetermined threshold, para 6).

Claim 12 has been analyzed and rejected according to claims 11 and 2 above. Claim 13 has been analyzed and rejected according to claims 11 and 3 above. Claim 14 has been analyzed and rejected according to claims 11 and 4 above. Claim 16 has been analyzed and rejected according to claims 11 and 6 above. Claim 17 has been analyzed and rejected according to claims 11 and 7 above. Claim 18 has been analyzed and rejected according to claims 12 and 8 above. Claim 19 has been analyzed and rejected according to claims 11 and 9 above. Claim 20 has been analyzed and rejected according to claims 11 and 10 above.

Claims 5 and 15 are rejected under 35 U.S.C. 103(a) as being unpatentable over Shalon (above) in view of Schuler (above) and Radomski (US 6507790 B1).

Claim 5: the combination of Shalon and Schuler teaches all the elements of claim 5, according to claim 4 above, including analyzing the sensor signal to detect a trigger event (the discussion in claim 4 above), except explicitly teaching analyzing the sonic signature to detect a trigger event. Radomski teaches an analogous field of endeavor by disclosing a device (title and abstract, ln 1-25; an acoustic monitor in fig. 1) in which a sonic signature is disclosed (five types of product defects are detected, col 5, ln 4-8) and analyzed to detect a trigger event (external triggering of the oscilloscope is facilitated, e.g., a warning alarm is activated, col 13, ln 21-25; e.g., the alarm is triggered by comparing the stored maximum power spectrum (68A, 68B, the upper envelope) and minimum power spectrum (70A, 70B, the lower envelope) to the real-time power spectrum (66, 66A) in fig. 3, and by determining whether the real-time power spectrum is higher than 68A/68B or lower than 70A/70B in different frequency bands in fig. 3, col 12, ln 66-67 and col 13, ln 1-25), for the benefit of improving sound monitoring performance (by increasing accuracy, e.g., providing two alert levels, warning and danger, col 4, ln 14-18, and by the capability of testing five types of sound based on the five acoustic signatures, col 4, ln 44-52). Therefore, it would have been obvious to one having ordinary skill in the art at the time the invention was made to have applied analyzing the sonic signature to detect the trigger event, as taught by Radomski, to the trigger event used in the device taught by the combination of Shalon and Schuler, for the benefits discussed above.

Claim 15 has been analyzed and rejected according to claims 14 and 5 above.
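The Radomski alarm logic, as the Office describes it, reduces to an envelope test: keep stored maximum and minimum reference power spectra and trigger whenever the real-time spectrum leaves that envelope in any frequency band. A minimal sketch:

```python
import numpy as np

def band_alarm(realtime_psd: np.ndarray,
               max_psd: np.ndarray,   # stored upper envelope (68A/68B)
               min_psd: np.ndarray):  # stored lower envelope (70A/70B)
    # Alarm if any band of the real-time spectrum (66/66A) exceeds the
    # stored maximum or falls below the stored minimum.
    return bool(np.any((realtime_psd > max_psd) | (realtime_psd < min_psd)))
```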
The prior art made of record and not relied upon (US 20020165718 A1 to Graumann et al.) is considered pertinent to applicant's disclosure because Graumann discloses voice activity detection (VAD) used for identifying and detecting noise features and segment features (para 17), which is part of the disclosure of the instant application.

Response to Arguments

Applicant's arguments filed on August 27, 2025 have been fully considered but are moot in view of the new ground(s) of rejection necessitated by applicant's amendment. Although a new ground of rejection has been used to address the additional limitations added to claims 1, 4-5, 11, and 14-15, a response is considered necessary for several of applicant's arguments, since references Shalon and Schuler will continue to be used to meet several claimed limitations.

With respect to the prior art rejection of independent claim 1 (and similarly claim 11) under 35 U.S.C. § 103(a), as set forth in the Office Action, applicant challenged the claim amendment "analyzing the acoustic signal to determine if the acoustic signal contains a voice signal and if so then extracting features from the voice signal" and argued: "Shalon, Schuler and Radomski, all fail to show, suggest or teach feature extraction from a vocal signal" (paragraphs 2-3 of page 5 of the Remarks Continuation "V. Drawing" filed on August 27, 2025). In response, the Office respectfully disagrees: the claim fails to recite what the "features" are or how the "analyzing the acoustic signal" is performed, and Shalon clearly teaches a voice recognition system (para 343) by which the analyzing is performed to extract features (the user's speech or something said to the user is extracted based on the voice recognition, para 343, etc., as discussed in the Office Action above); applicant is silent on these points, and the argument above is therefore moot. On the basis of the above analysis and the evidence from the prior art, the prior art rejection of independent claim 1 under 35 U.S.C. § 103(a), as set forth in the Office Action, is maintained. For at least similar reasons, the prior art rejection of independent claim 11 and dependent claims 2-10 and 12-20 is also maintained.

In the response to this Office Action, the Office respectfully requests that support be shown for language added to any original claims on amendment and for any new claims; that is, indicate support for newly added claim language by specifically pointing to the page(s) and line numbers in the specification and/or the drawing figure(s). This will assist the Office in prosecuting this application.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office Action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LESHUI ZHANG, whose telephone number is (571) 270-5589. The examiner can normally be reached Monday-Friday, 6:30am-4:00pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivian Chin, can be reached at 571-272-7848. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LESHUI ZHANG/
Primary Examiner, Art Unit 2695

Prosecution Timeline

Jan 29, 2024 — Application Filed
May 31, 2025 — Non-Final Rejection — §103, §112, §DP
Aug 27, 2025 — Response Filed
Dec 27, 2025 — Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications involving similar technology granted by this same examiner

Patent 12585677
AUTOMATED GENERATION OF IMPROVED LIST-TYPE ANSWERS IN QUESTION ANSWERING SYSTEMS
2y 5m to grant · Granted Mar 24, 2026
Patent 12572757
VIDEO PROCESSING METHOD, VIDEO PROCESSING APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant · Granted Mar 10, 2026
Patent 12567423
SYSTEM AND METHODS FOR UPSAMPLING OF DECOMPRESSED SPEECH DATA USING A NEURAL NETWORK
2y 5m to grant · Granted Mar 03, 2026
Patent 12567424
METHOD AND DEVICE FOR MULTI-CHANNEL COMFORT NOISE INJECTION IN A DECODED SOUND SIGNAL
2y 5m to grant · Granted Mar 03, 2026
Patent 12561354
SYSTEMS AND METHODS FOR ITEM-SPECIFIC KEYWORD RECOMMENDATION
2y 5m to grant · Granted Feb 24, 2026
Study what changed in these cases to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+36.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 928 resolved cases by this examiner. Grant probability is derived from the career allow rate.
