Prosecution Insights
Last updated: April 19, 2026
Application No. 18/606,717

SOUND EFFECT PROCESSING METHOD, DEVICE, AND STORAGE MEDIUM

Non-Final OA (§101, §102, §103)
Filed: Mar 15, 2024
Examiner: NGUYEN, QUYNH H
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Lenovo (Beijing) Limited
OA Round: 1 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (941 granted / 1078 resolved; +25.3% vs TC average; above average)
Interview Lift: +17.2% for resolved cases with an interview (strong)
Average Prosecution: 2y 8m (typical timeline)
Currently Pending: 29 applications
Total Applications: 1107 across all art units (career history)

Statute-Specific Performance

§101: 18.6% (-21.4% vs TC average)
§102: 7.4% (-32.6% vs TC average)
§103: 42.7% (+2.7% vs TC average)
§112: 10.3% (-29.7% vs TC average)

Tech Center averages are estimates. Based on career data from 1078 resolved cases.
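The headline figures above are simple arithmetic on the examiner's career counts. A quick sketch to reproduce them; note that the implied Tech Center average is back-calculated here from the reported delta, not a figure quoted in this report:

```python
# Reproduce the headline examiner statistics from the raw counts above.
granted, resolved = 941, 1078          # career totals for this examiner
allow_rate = granted / resolved        # displayed above rounded to 87%

tc_delta = 25.3                        # percentage points above TC average
tc_avg = round(allow_rate * 100 - tc_delta, 1)   # implied TC average

print(f"career allow rate: {allow_rate:.1%}")
print(f"implied tech-center average: {tc_avg}%")
```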

Office Action

Rejections under §101, §102, and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Objections

1. Claims 1-8 are objected to because of the following informalities: Claims 1-8 do not fall within one of the four statutory categories of invention. Supreme Court precedent and recent Federal Circuit decisions indicate that a statutory "process" under 35 U.S.C. 101 must (1) be tied to another statutory category (such as a particular apparatus), or (2) transform underlying subject matter (such as an article or material) to a different state or thing. While the instant claims recite a series of steps or acts to be performed, the claims neither transform underlying subject matter nor positively tie to another statutory category that accomplishes the claimed method steps, and therefore do not qualify as a statutory process. Claim 1 defines a "method," and it appears that applicant is defining a series of steps of "obtaining," "determining," and "outputting" that are not tied to any apparatus. Appropriate correction is required. Failure to make appropriate correction(s) would lead to 35 U.S.C. 101 rejection(s).

Claim Rejections - 35 USC § 101

2. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 17 recites "A storage medium storing a computer program." However, the claim is not limited to non-transitory embodiments, and the specification does not provide a definition limiting the meaning of this term to only non-transitory embodiments (see [0075] of the specification, which states that "The program implementing the present disclosure can be stored in a computer-readable medium or can have one or a plurality of signal forms. The signals can be downloaded from the Internet website, provided at the carrier signal, or provided in any other forms"). The claim therefore can reasonably be interpreted as encompassing transitory signal embodiments, which are nonstatutory (In re Nuijten, 500 F.3d 1346, 84 USPQ2d 1495 (Fed. Cir. 2007)). If the specification includes written description support, this rejection can be overcome by adding the term "non-transitory" to the claim (see USPTO Official Gazette notice 1351 OG 212), for example: "a non-transitory tangible computer storage medium storing a computer program." Dependent claims 18-20 inherit the same defects.

Claim Rejections - 35 USC § 102

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claims 1-5, 8-13, and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ding et al. (CN 107193386 A).
As to claim 1, Ding teaches a sound effect processing method comprising: obtaining information of a real object (specific implementation method, 5th paragraph – obtaining scene information of the meeting room, such as the walls and ceiling), the information including an association relationship between the real object and a virtual scene (id. – also obtaining virtual scene information, such as a desk and seats, as virtual environment objects), and an acoustic property of the real object (summary of the invention, 10th paragraph – obtaining the sound absorption coefficient of the environment object and at least one item of distance information); determining a sound output effect of the virtual scene based on the information of the real object (specific implementation method, 11th paragraph – the second audio signal is obtained by processing the first audio signal on the basis of the scene information and is then output, improving the realism of sound in the scene. For example, when the scene related to the electronic device is indoors, the environmental objects in the scene are relatively close and the absorption coefficient of most environmental objects, e.g., cement and metal, is typically small, so the sound indicated by the processed audio signal appears stronger than the original sound: the simulated echo delay is small and the echo is superimposed on the original sound. Whereas when the scene is a forest presented as a virtual scene, the environmental objects in the scene are at a greater distance, so the processed second audio signal appears as a sound having a distinct echo); and outputting sound of the virtual scene based on the sound output effect (specific implementation method, 11th paragraph – the second audio signal is output, per the same discussion).

As to claims 2, 10, and 18, Ding teaches the method of claim 1, the device of claim 9, and the storage medium of claim 17, and further teaches: obtaining image information of the real object in a real scene (summary of the invention, 3rd-7th paragraphs, and specific implementation method, 5th paragraph – collecting the real scene where the electronic device is located through the image sensor to obtain scene information); determining a target category included in the information of the real object based on the image information (specific implementation method, 5th paragraph – from the current scene information of the conference room in which the electronic device is located, at least one real environmental object in the conference room, such as a wall or ceiling, is detected); and determining a sound output effect of the virtual scene based on the information of the real object (specific implementation method, 11th paragraph – the indoor/forest discussion quoted above for claim 1), including: determining the acoustic property corresponding to the target category according to the determined target category (summary of the invention, 10th paragraph – obtaining the sound absorption coefficient of the environment object and at least one item of distance information).
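The indoor-versus-forest behavior Ding is cited for above (a short echo delay that superimposes on the original sound indoors, versus a distinct audible echo at forest distances) can be sketched as a single delay-and-attenuate reflection. This is an illustrative model only; the sample rate, distances, and absorption values are assumptions, not taken from Ding.

```python
SAMPLE_RATE = 8000    # Hz, assumed for illustration
SPEED_OF_SOUND = 343  # m/s

def add_echo(signal, distance_m, absorption):
    """Superimpose one reflection from an object `distance_m` away.

    The echo arrives after the round-trip delay and is attenuated by the
    object's sound absorption coefficient (0 = fully reflective).
    """
    delay = int(2 * distance_m / SPEED_OF_SOUND * SAMPLE_RATE)
    gain = 1.0 - absorption
    out = signal + [0.0] * delay      # extend the buffer for the echo tail
    for i, s in enumerate(signal):
        out[i + delay] += gain * s
    return out

impulse = [1.0] + [0.0] * 7
indoor = add_echo(impulse, distance_m=2.0, absorption=0.05)   # cement wall: short delay
forest = add_echo(impulse, distance_m=60.0, absorption=0.30)  # distant trees: long delay
```

With the nearby low-absorption wall, the reflection arrives almost immediately and reinforces the original sound; with the distant object, the same mechanism yields a separated echo.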
As to claims 3, 11, and 19, Ding teaches the method of claim 2, the device of claim 10, and the storage medium of claim 18, and further teaches: determining material information of the real object according to the determined target category; and determining the acoustic property of the real object based on the material information, wherein the acoustic property includes at least one or more of a reflection coefficient and an absorption coefficient of the real object to the sound (specific implementation method, 8th paragraph – detecting the wall of the conference room and identifying that the wall is cement; obtaining the sound absorption coefficient for cement by searching a database; then processing the first audio signal based on at least one absorption coefficient and at least one item of distance information. Specifically, an audio processing function for processing the first audio signal can be constructed from the absorption coefficient and the distance information, and the processed second audio signal is generated by performing a convolution calculation between the original first audio signal and the audio processing function. The audio processing function may be constructed from the attribute information of each environment object; in that case, the first audio signal is convolved with the function for each environment object separately, and the results are superimposed to obtain the second audio signal. The per-object contributions can also be given different weights according to the relative distance of the environment object to the user or the electronic device – that is, an environment object nearer the user influences the sound more and is given a greater weight).
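The convolution-and-superposition construction in the passage above (one audio processing function per environment object, convolved with the first audio signal, then superimposed with distance-dependent weights) can be sketched as follows. The impulse responses and the inverse-distance weighting rule are invented for illustration; Ding is cited only for the general scheme.

```python
def convolve(signal, kernel):
    """Plain discrete convolution (pure Python for self-containment)."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def second_audio_signal(first, objects):
    """Per-object convolution, then distance-weighted superposition.

    `objects` is a list of (impulse_response, distance_m) pairs; nearer
    objects get a larger weight, as in the passage above. The 1/distance
    weighting is an assumed rule, not Ding's.
    """
    n = max(len(first) + len(ir) - 1 for ir, _ in objects)
    second = [0.0] * n
    for ir, distance in objects:
        weight = 1.0 / max(distance, 1.0)
        for i, v in enumerate(convolve(first, ir)):
            second[i] += weight * v
    return second

first = [1.0, 0.5]                # toy "first audio signal"
wall = ([1.0, 0.0, 0.2], 2.0)     # invented cement-wall impulse response, 2 m away
ceiling = ([0.6, 0.1], 4.0)       # invented ceiling impulse response, 4 m away
mixed = second_audio_signal(first, [wall, ceiling])
```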
As to claims 4, 12, and 20, Ding teaches the method of claim 1, the device of claim 9, and the storage medium of claim 17, and further teaches: constructing a scene model including a real scene and the virtual scene; and obtaining a first characteristic parameter of a sound source in the scene model, wherein the first characteristic parameter at least includes a position parameter (specific implementation method, 8th paragraph – the current scene is a real meeting room in which the electronic device is located, containing a wall, ceiling, and other real environment objects, on which the related art builds a virtual office with virtual environment objects such as seats. In this case, both the real environment objects in the current meeting room – wall, ceiling, desk, and chairs – and the virtual constructions in the current conference can be detected, the attribute information of the detected real and virtual environment objects is obtained, and the first audio signal is processed based on at least one item of that attribute information. When the scene is a virtual scene, at least one virtual environment object in the virtual scene is detected, its sound absorption coefficient and relative distance information are obtained, and the first audio signal is processed based on at least one absorption coefficient and at least one item of distance information. For a detected virtual environment object, the attribute information created with the object can be stored in a database, and the attribute information corresponding to the virtual object is then obtained by searching that database); and generating the sound output effect of the virtual scene based on the first characteristic parameter of the sound source (specific implementation method, 11th paragraph – the second audio signal, obtained by processing the original audio signal on the basis of the scene information, is output, improving the authenticity of the sound scene; see the indoor/forest discussion quoted above for claim 1).

As to claims 5 and 13, Ding teaches the method of claim 4 and the device of claim 12, and further teaches: determining a second characteristic parameter of the real object in the real scene based on the scene model, wherein the second characteristic parameter includes at least a size parameter; and generating the sound output effect of the virtual scene according to the first characteristic parameter of the sound source and the second characteristic parameter of the real object in the real scene (specific implementation method, 8th paragraph – the real/virtual environment-object discussion quoted above for claims 4, 12, and 20; and 11th paragraph – the forest presented as a virtual scene reflects the size of the environment: because far environment objects exist in the scene, the processed second audio signal appears as a sound having an echo, so the sound in different scenes can be simulated more realistically).

As to claims 8 and 16, Ding teaches the method of claim 3 and the device of claim 11, and further teaches: determining coverage areas of various materials for the real object based on image information of the real object in response to the material information of the real object including different materials; and determining the acoustic property of the real object based on the coverage areas (specific implementation method, 11th paragraph – the indoor/forest discussion quoted above for claim 1: the absorption coefficients of most indoor environmental objects, such as cement and metal, are generally small, which determines how the processed audio signal sounds relative to the original).

Claims 9 and 17 are rejected for the same reasons discussed above with respect to claim 1. Furthermore, Ding teaches one or more memories storing computer instructions that are executed by one or more processors, and a storage medium storing a computer program that is executed by one or more processors (Fig. 2, memory 202, processor 203, and related text; 2nd paragraph from the end of the specific implementation method, before the claims).

Claim Rejections - 35 USC § 103

5. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ding et al. (CN 107193386 A) in view of Hu et al. (WO 2021044203 A1).

As to claims 6 and 14, Ding teaches the sound effect processing method according to claim 2 and the device according to claim 10, and further teaches determining that the image information of the real object (summary of the invention, 3rd-7th paragraphs, and specific implementation method, 5th paragraph – collecting the real scene where the electronic device is located through the image sensor to obtain scene information) corresponds to at least two categories (specific implementation method, 5th paragraph – at least one real environmental object, such as a wall or ceiling, is detected from the current scene information of the conference room in which the electronic device is located). Ding does not explicitly discuss determining probabilities corresponding to the categories and using the category corresponding to the highest of the probabilities as the target category of the real object.
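The highest-probability rule that the rejection turns to Hu for is a standard argmax over per-category scores. A minimal sketch, with invented categories and probabilities:

```python
def target_category(probabilities):
    """Return the category with the highest probability, in the manner of
    Hu's category-determination sub-module (categories here are invented)."""
    return max(probabilities, key=probabilities.get)

# e.g. an image region that could plausibly be several objects:
probs = {"wall": 0.72, "ceiling": 0.21, "desk": 0.07}
assert target_category(probs) == "wall"
```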
Hu teaches that the trained feature extraction network and classification network can recognize stacked objects in real scenes despite blur, occlusion, and irregular placement, which significantly improves recognition accuracy. Hu's category determination module 43 is configured to determine the category of each object in a sequence according to the regional characteristics of each block area. In one possible implementation, the category determination module includes: a probability determination sub-module, configured to determine the probability that a first block area belongs to each set object category according to the area characteristics of the first block area, where the first block area is any one of the block areas; and a category determination sub-module, which takes the object category with the highest probability among the categories to which the first block area's area features may belong as the object category (description). It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Hu into the teachings of Ding for the purpose of recognizing a sequence in an image: after the category of each object in the sequence is determined, the correspondence between each category and the value it represents determines the total value represented by the sequence.

7. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ding et al. (CN 107193386 A) in view of He et al. (CN 101136197 A).
As to claim 7, Ding teaches the sound effect processing method according to claim 2, wherein determining the acoustic property corresponding to the target category according to the determined target category (specific implementation method, 5th paragraph – at least one real environmental object, such as a wall or ceiling, is detected from the current scene information of the conference room; summary of the invention, 10th paragraph – obtaining the sound absorption coefficient of the environment object and at least one item of distance information) includes responding to the target category including a plurality of sub-categories (specific implementation method, 8th and 17th paragraphs – cement, metal); and determining the sound output effect of the virtual scene according to the information of the real object includes determining sub-sound output effects corresponding to the acoustic properties based on the acoustic properties corresponding to the sub-categories (specific implementation method, 8th paragraph – detecting the wall of the conference room, identifying it as cement, obtaining the cement sound absorption coefficient from a database, and processing the first audio signal accordingly: an audio processing function is constructed from the absorption coefficient and distance information, the first audio signal is convolved with the per-object audio processing functions, and the results are superimposed with distance-dependent weights to obtain the second audio signal, as discussed above for claims 3, 11, and 19; 11th paragraph – the second audio signal is output, per the indoor/forest discussion quoted above for claim 1). Ding does not explicitly discuss mixing the sub-sound output effects based on a predetermined ratio to obtain the sound output effect of the virtual scene.
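Mixing sub-sound output effects at a predetermined ratio, the limitation the rejection finds missing from Ding, reduces to a weighted sample-wise sum of the component effects. The 7:3 direct-to-reverberant ratio below is an invented example:

```python
def mix(effects, ratios):
    """Weighted sample-wise mix of equal-length sub-sound effects."""
    assert len(effects) == len(ratios)
    n = len(effects[0])
    return [sum(r * e[i] for e, r in zip(effects, ratios)) for i in range(n)]

direct = [1.0, 0.0, 0.0]          # toy direct-sound component
reverberant = [0.2, 0.5, 0.3]     # toy reverberant component
output = mix([direct, reverberant], ratios=[0.7, 0.3])  # predetermined 7:3 ratio
```

Adjusting the ratio pair is the knob that corresponds to He's reverberant-to-direct sound adjustment, discussed next.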
He teaches that the output part uses two-channel virtual surround sound technology so that the output generates a stereo sense of space, implementing real sound field characteristics and generating various special reverberation sounds that are natural, full, and smooth, with a three-dimensional sound field effect. Furthermore, He sets a plurality of regulators to complete adjustments including the delay spectrum, mixing of the sound spectrum, attenuation characteristics of the sound, frequency-phase characteristics, and the ratio of reverberant to direct sound, so as to realize the sound effects of all kinds of musical instruments and human voices in different settings and to simulate the reverberation effect in various environments (last paragraph before the claims). It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of He into the teachings of Ding for the purpose of generating different reverberation and timbre-change effects, with the all-pass filter module and a low-frequency oscillator modulating the delay-line unit, which can effectively eliminate the undesirable sound-coloration effect.

8. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Ding et al. (CN 107193386 A) and Hu et al. (WO 2021044203 A1) in view of He et al. (CN 101136197 A). Claim 15 is rejected for the same reasons discussed above with respect to claim 7.

Conclusion

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUYNH H NGUYEN, whose telephone number is (571) 272-7489. The examiner can normally be reached Monday-Thursday, 7:30 AM-5:30 PM. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/QUYNH H NGUYEN/
Primary Examiner, Art Unit 2693

Prosecution Timeline

Mar 15, 2024: Application Filed
Jan 27, 2026: Non-Final Rejection under §101, §102, and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591740: METHODS AND SYSTEMS FOR GENERATING TEXTUAL FEATURES. Granted Mar 31, 2026 (2y 5m to grant)
Patent 12567409: RESTRICTING THIRD PARTY APPLICATION ACCESS TO AUDIO DATA CONTENT. Granted Mar 03, 2026 (2y 5m to grant)
Patent 12566920: System and Method to Generate and Enhance Dynamic Interactive Applications from Natural Language Using Artificial Intelligence. Granted Mar 03, 2026 (2y 5m to grant)
Patent 12563141: SYSTEM AND METHOD OF CONNECTING A CALLER TO A RECIPIENT BASED ON THE RECIPIENT'S STATUS AND RELATIONSHIP TO THE CALLER. Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554761: DATA SOURCE CURATION FOR LARGE LANGUAGE MODEL (LLM) PROMPTS. Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87% (99% with interview, a +17.2% lift)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 1078 resolved cases by this examiner. Grant probability derived from career allow rate.
