Prosecution Insights
Last updated: April 19, 2026
Application No. 18/621,520

Method For Generating A Sound Effect

Final Rejection: §101, §102, §112

Filed: Mar 29, 2024
Examiner: IANNUZZI, PETER J
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Sony Interactive Entertainment Europe Limited
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 8m
Grant Probability With Interview: 82%

Examiner Intelligence

Career Allow Rate: 67% (343 granted / 509 resolved; -2.6% vs TC avg; above average)
Interview Lift: +14.6% across resolved cases with an interview (Moderate, roughly +15%)
Typical Timeline: 2y 8m average prosecution
Career History: 548 total applications across all art units; 39 currently pending
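The headline figures above can be reproduced from the raw counts on the page. A minimal sketch (the 82% with-interview rate is taken as given, since the underlying interview-case counts are not shown):

```python
# Reproduce the examiner dashboard figures from the counts shown above.
granted, resolved = 343, 509

career_allow_rate = granted / resolved        # ~0.674, displayed as "67%"
with_interview_rate = 0.82                    # taken as given from the page

# Interview lift = with-interview rate minus the career allow rate.
interview_lift_pts = (with_interview_rate - career_allow_rate) * 100

print(f"Career allow rate: {career_allow_rate:.1%}")      # 67.4%
print(f"Interview lift: {interview_lift_pts:+.1f} pts")   # +14.6 pts
```

The +14.6% interview lift shown above is consistent with this difference, which the "Moderate" badge rounds to +15%.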

Statute-Specific Performance

§101: 16.2% (-23.8% vs TC avg)
§103: 30.8% (-9.2% vs TC avg)
§102: 27.6% (-12.4% vs TC avg)
§112: 18.9% (-21.1% vs TC avg)
Deltas are measured against the estimated Tech Center average; based on career data from 509 resolved cases.
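Each statute row pairs the examiner's rate with a delta against the Tech Center average, so the implied TC baseline can be recovered directly from the figures above:

```python
# Recover the implied Tech Center average from each rate/delta pair above.
rows = {
    "101": (16.2, -23.8),
    "103": (30.8, -9.2),
    "102": (27.6, -12.4),
    "112": (18.9, -21.1),
}

for statute, (examiner_rate, delta_vs_tc) in rows.items():
    tc_avg = examiner_rate - delta_vs_tc  # delta = examiner rate minus TC avg
    print(f"Section {statute}: examiner {examiner_rate}% vs TC avg {tc_avg:.1f}%")
```

All four pairs imply the same TC-average estimate of 40.0%; the examiner trails the Tech Center baseline on every statute, most sharply under §101 and §112.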

Office Action

§101 §102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed January 16, 2026 have been fully considered but they are not persuasive. Applicant asserts that the new claim limitations overcome all rejections without providing any particular analysis regarding the multiple statutory categories cited in the prior office action. Applicant's arguments are unpersuasive and the claims remain rejected as noted below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 and 17-35 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 18, 19, 21-24, 28, 30, 31 and 33-35 recite the phrase "characteristics that are common to the identified multiple sounds"; however, the metes and bounds of this phrase are not clearly defined, and one having ordinary skill in the art would not be apprised of the scope of the claimed invention. Looking to the specification to construe the metes and bounds of the phrase, paragraphs 33-35 recite that the "common effect characteristic may literally be a musical instrument (i.e. a timbre corresponding to that instrument) that is present in all of the plurality of sounds, or may be any of the other characteristics mentioned above (optionally including metadata)"; or "the common effect characteristic may be a more complex 'aesthetic' or 'style'. As an example, the common effect characteristic could be a music genre such as 'Pop' or 'Jazz', which is characterized by a combination of factors such as choice of instruments, speeds, chord sequences and so on."; or "identifying a common effect characteristic of a plurality of sounds may in some embodiments require fuzzy logic, or may be too complex to express in terms of human-processable logic. For example, group 1020-2 in Fig. 1 does not have a single specific characteristic (schematically represented by instrument icons) which is present in all of the sounds 1010-5 to 1010-8 but does have a general common characteristic (schematically represented by different drum icons). In such embodiments, an effect characterizing model can be trained by machine learning to determine the common effect characteristic for a given plurality of sounds. The effect characterizing model may for example include one or more discrimination models, or a general classification model. For example, the effect characterizing model may comprise a convolutional neural network."

This definition does not provide a clear limit as to what is to be considered a "characteristic that is common to the identified multiple sounds" because it appears to be an ascertainable quality in the sounds that includes subjective measures such as "genre" or concepts "too complex to express in terms of human-processable logic". The definition therefore precludes clear metes and bounds for the phrase, and as such the claims are indefinite. The claims will be examined as best understood by the Examiner.
Claims 1, 21-24, 26-28 and 33-35 recite the phrase "base sound"; however, the metes and bounds of this phrase are not clearly defined because (1) "base sound" has been defined with a subjective quality (see paragraph 36: "The base sound is any sound which will desirably be adapted to have the common effect characteristic." (emphasis added)), and (2) "base sound" is defined in terms of the phrase "common effect characteristic", which is itself an indefinite phrase.

Claims 1, 17, 21, 22, 24, 28, 29 and 33-35 recite the phrase "desired sound aesthetic"; this phrase has unclear metes and bounds because it is a subjective quality without clear boundaries.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 17-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. These claims do not fall within at least one of the four categories of patent eligible subject matter because they appear to be without a tangible form and are software "per se"; see MPEP § 2106.03.

Claims 28-35 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception. The claims will be analyzed with respect to the Subject Matter Eligibility Test at MPEP § 2106.

Subject Matter Eligibility – Step 1 (see MPEP § 2106.03)

Claims 28-35 recite one of the four statutory categories of subject matter.

Subject Matter Eligibility – Step 2A Prong 1 (see MPEP § 2106.04(a)-(c))

The claims recite abstract ideas in the following category: mental processes (MPEP § 2106.04(a)(2)III). The abstract ideas have been noted in the claims below.
Regarding claim 28: identifying multiple sounds that exhibit a desired sound aesthetic; determining one or more characteristics that are common to the identified multiple sounds; obtaining a base sound that does not have the one or more characteristics that are common to the identified multiple sounds; and generating a sound effect that exhibits the desired sound aesthetic based at least on the base sound and the one or more characteristics that are common to the identified multiple sounds (these steps are mental processes that can be performed in the human mind by an individual who is generating new sound effects; e.g., an individual can find a melody with a particular instrument and replace it with another instrument and produce the new melody).

Regarding claim 29: identifying multiple sounds that exhibit a desired sound aesthetic comprises identifying the multiple sounds that exhibit a desired, respective sound aesthetic from a video game, wherein each sound of the multiple sounds is associated with at least one of different locations, different levels, or different characters of the video game (mental process of human judgement regarding the quality and nature of a sound in an environment).

Regarding claim 30: determining one or more characteristics that are common to the identified multiple sounds comprises determining at least one of a frequency range, a pitch shift, a key change, a timbre, a note sequence, a music genre, or a chord progression present and in common in each identified sound of the multiple sounds (mental process of human judgement regarding the quality and nature of a sound).

Regarding claim 31: determining the one or more characteristics that are common to the identified multiple sounds comprises applying a trained machine learning model to the identified multiple sounds to identify the one or more characteristics common to the identified multiple sounds (mental process of human judgement regarding the quality and nature of a sound).
Regarding claim 33: obtaining a base sound that does not have the one or more characteristics that are common to the identified multiple sounds comprises obtaining the base sound that includes a desired aesthetic to be adapted to include the one or more characteristics that are common to the identified multiple sounds (mental process of human judgement regarding the quality and nature of a sound).

Regarding claim 34: generating a sound effect that exhibits the desired sound aesthetic based at least on the base sound and the one or more characteristics that are common to the identified multiple sounds comprises applying the one or more characteristics that are common to the identified multiple sounds to the base sound such that the generated sound effect exhibits the desired sound aesthetic associated with the one or more characteristics (a mental process insofar as the generating is the abstract imagining of the sound to be produced or the abstract composition thereof; Examiner notes that "generating" as recited is detached from any hardware production of sound).

Regarding claim 35: this claim recites abstract ideas as noted above regarding claim 28.

Subject Matter Eligibility – Step 2A Prong 2 (see MPEP § 2106.04(d))

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements are generic computer hardware; insignificant extra-solution activity such as collecting information, analyzing it, and displaying certain results of the collection and analysis; and the use of software to tailor information and provide it to the user on a generic computer. These additional elements, individually and in combination, provide for limitations that do not integrate the judicial exception into a practical application.
These additional elements (1) add "insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)" and (2) generally link "the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h)" (MPEP § 2106.04(d)I).

These additional elements, individually and in combination, are not limitations that provide for "improvement in the functioning of a computer, or an improvement to other technology or technical field, as discussed in MPEP §§ 2106.04(d)(1) and 2106.05(a)"; apply or use the "judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, as discussed in MPEP § 2106.04(d)(2)"; implement the "judicial exception with, or using a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, as discussed in MPEP § 2106.05(b)"; effect "a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP § 2106.05(c)"; or apply or use "the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP § 2106.05(e)" (MPEP § 2106.04(d)I). As such, the claims as a whole do not integrate the judicial exception into a practical application.

Subject Matter Eligibility – Step 2B (see MPEP § 2106.05)

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because the additional elements are well-understood, routine and conventional generic computer hardware and insignificant extra-solution activity (see MPEP § 2106.05). See U.S. 2024/0395028 at para. 330 for the well-understood, routine and conventional (WURC) nature of convolutional neural networks.
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 17-35 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Pub. 2022/0366881 by Williams.

Regarding claim 1, Williams discloses a computer-implemented method comprising: identifying multiple sounds that exhibit a desired sound aesthetic (para. 61-63 – see audiovisual datasets); determining one or more characteristics that are common to the identified multiple sounds (para. 61-63 – see analyzing audiovisual datasets); obtaining a base sound that does not have the one or more characteristics that are common to the identified multiple sounds (para. 61-63 – see producing audio with a certain sound); and generating a sound effect that exhibits the desired sound aesthetic based at least on the base sound and the one or more characteristics that are common to the identified multiple sounds (para. 61-63 – see producing audio with a certain sound).

Regarding claim 17, Williams discloses the computer-implemented method of claim 1, wherein identifying multiple sounds that exhibit a desired sound aesthetic comprises identifying the multiple sounds that exhibit a desired, respective sound aesthetic from a video game, wherein each sound of the multiple sounds is associated with at least one of different locations, different levels, or different characters of the video game (para. 61 – see extracting audiovisual features).
Regarding claim 18, Williams discloses the computer-implemented method of claim 1, wherein determining one or more characteristics that are common to the identified multiple sounds comprises determining at least one of a frequency range, a pitch shift, a key change, a timbre, a note sequence, a music genre, or a chord progression present and in common in each identified sound of the multiple sounds (para. 61-63 – see noted qualities of the analyzed data).

Regarding claim 19, Williams discloses the computer-implemented method of claim 1, wherein determining the one or more characteristics that are common to the identified multiple sounds comprises applying a trained machine learning model to the identified multiple sounds to identify the one or more characteristics common to the identified multiple sounds (para. 53 – see style identification; para. 61-63 – see noted qualities of the analyzed data).

Regarding claim 20, Williams discloses the computer-implemented method of claim 19, wherein the trained machine learning model comprises a convolutional neural network (para. 36-38 – see neural networks).

Regarding claim 21, Williams discloses the computer-implemented method of claim 1, wherein obtaining a base sound that does not have the one or more characteristics that are common to the identified multiple sounds comprises obtaining the base sound that includes a desired aesthetic to be adapted to include the one or more characteristics that are common to the identified multiple sounds (para. 61-63 – see noted qualities of the analyzed data and the style processing).
Regarding claim 22, Williams discloses the computer-implemented method of claim 1, wherein generating a sound effect that exhibits the desired sound aesthetic based at least on the base sound and the one or more characteristics that are common to the identified multiple sounds comprises applying the one or more characteristics that are common to the identified multiple sounds to the base sound such that the generated sound effect exhibits the desired sound aesthetic associated with the one or more characteristics (para. 61-63 – see noted qualities of the analyzed data and transformation/generation of new audio tracks).

Regarding claim 23, Williams discloses the computer-implemented method of claim 22, wherein applying the one or more characteristics that are common to the identified multiple sounds to the base sound is performed iteratively until a criterion is achieved (para. 61-63 – see noted qualities of the analyzed data and transformation/generation of new audio tracks).

Regarding claim 24, Williams discloses the computer-implemented method of claim 22, wherein applying the one or more characteristics that are common to the identified multiple sounds to the base sound such that the generated sound effect exhibits the desired sound aesthetic associated with the one or more characteristics comprises applying the one or more characteristics and the base sound to a generative adversarial network to generate the sound effect that exhibits the desired sound aesthetic (para. 61-63 – see noted qualities of the analyzed data and transformation/generation of new audio tracks).

Regarding claim 25, Williams discloses the computer-implemented method of claim 1, wherein each sound of the multiple identified sounds comprises context data indicating a multimedia context for the sound (para. 53 – see style identification; para. 61-63 – see noted qualities of the analyzed data).
Regarding claim 26, Williams discloses the computer-implemented method of claim 25, further comprising: determining a common effect characteristic for each sound of the identified multiple sounds; obtaining context data associated with the base sound; and selecting the common effect characteristic to apply to the base sound based on the context data associated with the base sound (para. 61-63 – see noted qualities of the analyzed data and transformation/generation of new audio tracks).

Regarding claim 27, Williams discloses the computer-implemented method of claim 26, wherein selecting the common effect characteristic to apply to the base sound comprises matching the context data of the base sound to the common effect characteristic for each sound of the identified multiple sounds (para. 61-63 – see noted qualities of the analyzed data and transformation/generation of new audio tracks).

Regarding claims 28-35, Williams discloses these claims as noted above regarding claims 1 and 17-27, mutatis mutandis.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER J IANNUZZI, whose telephone number is (571) 272-5793. The examiner can normally be reached M-F 9:30AM-5:30PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kang Hu, can be reached at 571-270-1344. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER J IANNUZZI/
Primary Examiner, Art Unit 3715
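For readers mapping the rejection to the claims, the four steps recited in independent claims 1 and 28 can be sketched as a toy program. This is purely illustrative and hypothetical: feature dicts stand in for sounds, and an exact feature-match stands in for the claimed "common characteristics". It is neither the applicant's implementation nor Williams' disclosure.

```python
# Toy illustration of the four claimed steps. Hypothetical throughout:
# each "sound" is reduced to a dict of features, not real audio.

def common_characteristics(sounds):
    """Step 2: keep the feature/value pairs shared by every identified sound."""
    return dict(set.intersection(*(set(s.items()) for s in sounds)))

def generate_effect(base, common):
    """Step 4: apply the common characteristics to the base sound."""
    return {**base, **common}

# Step 1: sounds identified as exhibiting the desired aesthetic ("jazz").
identified = [
    {"genre": "jazz", "timbre": "brass", "tempo": 120},
    {"genre": "jazz", "timbre": "brass", "tempo": 96},
]

# Step 3: a base sound that lacks the common characteristics.
base = {"timbre": "strings", "tempo": 80}

common = common_characteristics(identified)
effect = generate_effect(base, common)
print(effect)  # {'timbre': 'brass', 'tempo': 80, 'genre': 'jazz'}
```

Notably, the §112 indefiniteness rejection targets exactly the gap this toy hides: per the cited specification passages, the real common characteristic may be a genre or something "too complex to express in terms of human-processable logic", rather than an exact feature match.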

Prosecution Timeline

Mar 29, 2024: Application Filed
Dec 27, 2025: Non-Final Rejection (§101, §102, §112)
Jan 16, 2026: Response Filed
Feb 23, 2026: Final Rejection (§101, §102, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592126: SYSTEMS AND METHODS OF ELECTRONIC GAMING INCLUDING GESTURE-BASED PLAYER CONSTRUCTED SYMBOL COMBINATIONS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12589304: METHOD AND AR GLASSES FOR AR GLASSES INTERACTIVE DISPLAY (granted Mar 31, 2026; 2y 5m to grant)
Patent 12589311: PERFORMANCE PREDICTION FOR VIRTUALIZED GAMING APPLICATIONS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12589290: FUNCTION BUTTON MODULE WITH VARIABLE FUNCTION LAYOUT AND GAME CONTROLLER (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586442: SYSTEM AND METHOD FOR IMPLEMENTING SINGLE ACCOUNT AND SINGLE WALLET FOR DISTRIBUTED GAMING SYSTEM ACROSS JURISDICTIONS (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get these cases past this examiner (based on the 5 most recent grants).


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 82% (+14.6%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 509 resolved cases by this examiner. Grant probability derived from career allow rate.
