Prosecution Insights
Last updated: April 19, 2026
Application No. 18/572,196

PROGRAM, INFORMATION PROCESSING METHOD, RECORDING MEDIUM, AND INFORMATION PROCESSING DEVICE

Non-Final OA: §101, §102, §103
Filed
Dec 20, 2023
Examiner
SHAH, PARAS D
Art Unit
2653
Tech Center
2600 — Communications
Assignee
Sony Group Corporation
OA Round
2 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 3y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (above average; 474 granted / 645 resolved; +11.5% vs TC avg)
Interview Lift: +31.1% (strong), measured over resolved cases with an interview
Typical Timeline: 3y 9m average prosecution; 24 applications currently pending
Career History: 669 total applications across all art units

Statute-Specific Performance

§101: 20.3% (-19.7% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 645 resolved cases.

Office Action

§101 §102 §103
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on 12/09/2025. Claims 1-22 are pending and have been examined. Any previous objection/rejection not mentioned in this Office Action has been withdrawn by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Change of Examiner

The Examiner of record for this application has changed to Paras Shah.

Examiner Note

In an effort to advance prosecution, the examiner contacted the Applicant's representative, Bradley Lytle, on 02/19/2026, but the call was not returned.

Response to Applicant's Amendments and Arguments

The Applicant has amended claims 1-13, 15, 17-19, and 21 to recite that the program is stored on a non-transitory computer-readable medium and a "non-transitory computer readable recording medium". Hence, the 35 USC 101 rejections with respect to these claims have been withdrawn. The Applicant has also removed "unit" from claims 1-3, 6-8, and 10-22 and added "circuitry". Therefore, the 35 USC 112(f) interpretations have been withdrawn. After careful review of the prior art from the IDS and search, a new set of prior art rejections has been applied. Therefore, a new non-final is being sent.

Duplicate Claims

Applicant is advised that should claim 1 be found allowable, claim 15 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 16 and 22 are rejected under 35 U.S.C. 101 because the claims appear to be directed to a software embodiment and not to a hardware embodiment, where a machine claim is directed towards a system, apparatus, or arrangement. The claims at present recite a plurality of neural networks, a decoder, an encoder, a process result, and a sub-neural network, all of which relate to data structures represented as models. No hardware is recited with respect to the device. The claims appear to be directed towards a software embodiment. Paragraph [0083] of the as-filed Specification describes the elements of the system as implementable by software alone actualizing the embodiments of the invention. The claimed limitations are capable of being performed by software alone, as described in the above paragraph, since no hardware component is being claimed. Software alone is not a physical component and thus is not statutory, since software does not define any structural and functional interrelationships between the computer programs and other claimed elements of a computer which permit the computer's program functionality to be realized. Hence, the stated functions comprise software and are thus not directed to a hardware embodiment. Data structures not claimed as embodied in computer-readable media are descriptive material per se and are not statutory because they are not capable of causing functional change in the computer. See, e.g., Warmerdam, 33 F.3d at 1361, 31 USPQ2d at 1760 (claim to a data structure per se held nonstatutory).
Such claimed data structures do not define any structural and functional interrelationships between the data and other claimed aspects of the invention which permit the data structure's functionality to be realized. In contrast, a claimed computer-readable medium encoded with a data structure defines structural and functional interrelationships between the data structure and the computer software and hardware components which permit the data structure's functionality to be realized, and is thus statutory.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-4, 8-11, 14-18, and 20-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sawata et al. ("All for One and One for All: Improving Music Separation by Bridging Networks", cited in IDS).
As to claims 1 and 15-16, Sawata teaches a program stored on a non-transitory computer-readable medium for causing a computer to execute an information processing method (see Figure 3b and sect. 3.1, where the described implementation requires an inherent processor and associated program to operate and realize the model, and where sampling of audio and training are described), the information processing method comprising:

- generating, by a neural network, sound source separation information for separating a predetermined sound source signal from a mixed sound signal containing a plurality of sound source signals (see Figure 3b, where the input mixture is inputted to the architecture and the separated sources are then output);
- transforming, by an encoder included in the neural network, a feature extracted from the mixed sound signal (see Figure 3b, where on the left side of the figure the input mixture is passed through the Affine + BN block and then the Nonlinearity block);
- inputting a process result from the encoder to each of a plurality of sub-neural networks included in the neural network (see Figure 3b, where the output of the averaging block feeds the middle BLSTM blocks, which are arranged in parallel from top to bottom with an ellipsis in between); and
- inputting the process result from the encoder and a process result from each of the plurality of sub-neural networks to a decoder included in the neural network (see Figure 3b, where the output of the 2nd averaging block on the right side is input into Affine + BN followed by Nonlinearity blocks in sequence).

As to claim 14, apparatus claims 1 and 15-16 and method claim 14 are related as an apparatus and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 14 is similarly rejected under the same rationale as applied above with respect to each function of the apparatus claims.
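The data flow the rejection maps onto Sawata's Figure 3b (an encoder, a plurality of parallel sub-networks, and a decoder that receives both the encoder output and every sub-network output) can be sketched as follows. This is a minimal illustrative NumPy sketch under assumed toy sizes, not Sawata's actual model: the layer shapes, the `affine_nonlin` helper, and the feed-forward stand-ins for the BLSTM blocks are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAMES, BINS, HIDDEN, N_SUB = 100, 257, 64, 4  # assumed toy sizes

def affine_nonlin(x, out_dim):
    """Affine transform + nonlinearity, the block pattern cited from Fig. 3b."""
    w = rng.standard_normal((x.shape[-1], out_dim)) * 0.1
    return np.tanh(x @ w)

# Encoder: transforms (and here shrinks) the feature of the mixed sound signal.
mixture_feature = rng.standard_normal((FRAMES, BINS))
h_enc = affine_nonlin(mixture_feature, HIDDEN)

# The encoder's process result goes to EACH of a plurality of sub-networks
# (the parallel BLSTM blocks in Fig. 3b; plain feed-forward stand-ins here).
h_subs = [affine_nonlin(h_enc, HIDDEN) for _ in range(N_SUB)]

# Decoder receives the encoder result AND every sub-network result (claim 1),
# then emits separation information as a soft mask in [0, 1].
decoder_in = np.concatenate([h_enc] + h_subs, axis=-1)
w_dec = rng.standard_normal((decoder_in.shape[-1], BINS)) * 0.1
mask = 1.0 / (1.0 + np.exp(-(decoder_in @ w_dec)))

assert mask.shape == (FRAMES, BINS)  # one mask value per time-frequency bin
```

The point of the sketch is only the wiring: the decoder's input concatenates the encoder output with all sub-network outputs, which is the limitation the rejection reads onto the second averaging block in Figure 3b.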
As to claim 2, Sawata does teach wherein each of the plurality of sub-neural networks includes a recurrent neural network that uses at least one of a temporally past process result or a temporally future process result for a current input (see Figure 3b, BLSTM blocks in the middle). (The examiner notes that operating on past data is an intrinsic feature of the RNN architecture, and that a BLSTM is a form of RNN.)

As to claim 3, Sawata does teach wherein the recurrent neural network includes a neural network using a gated recurrent unit (GRU) algorithm or a long short-term memory (LSTM) algorithm (see Figure 3b, where BLSTM is shown in the middle blocks).

As to claim 4, Sawata does teach wherein the encoder transforms the feature by reducing a size of the feature (see sect. 1, right column, 2nd paragraph, last four lines, where the input is in the frequency domain as a spectrogram; see sect. 3.1, 1st paragraph, regarding the STFT magnitude domain; and see Figure 4b, where the Affine + BN block and the averaging operations performed on the input data reduce its size from the frequency domain to vectorized form, which is then inputted into the BLSTMs).

As to claim 8, Sawata does teach wherein the encoder includes affine transformation circuitry (see Figure 3b, where an affine transformation is included in the leftmost blocks).

As to claim 9, Sawata does teach wherein the decoder generates the sound source separation information based on the process result from the encoder and the process result from each of the plurality of sub-neural networks (see Figure 3b, where the rightmost side of the figure shows the separated sources J through 1, based on propagation of the input through the architecture).

As to claim 10, Sawata does teach wherein the decoder includes affine transformation circuitry (see Figure 3b, where an affine transformation is included in the rightmost blocks).
As to claim 11, Sawata does teach wherein feature extraction circuitry extracts the feature from the mixed sound signal (see sect. 1, right column, 2nd paragraph, last four lines, where the input is in the frequency domain as a spectrogram, and see sect. 3.1, 1st paragraph, regarding the STFT magnitude domain).

As to claims 17 and 21-22, Sawata teaches a program stored on a non-transitory computer-readable medium for causing a computer to execute an information processing method (see Figure 3b and sect. 3.1, where the described implementation requires an inherent processor and associated program to operate and realize the model, and where sampling of audio and training are described), the information processing method comprising:

- generating, by each of a plurality of neural networks, sound source separation information for separating a different sound source signal from a mixed sound signal containing a plurality of sound source signals (see Figure 3b, where the input mixture is inputted to the architecture and the separated sources are then output from the different networks that lead to the respective determined sources);
- transforming, by an encoder included in one of the plurality of neural networks, a feature extracted from the mixed sound signal (see Figure 3b, where on the left side of the figure the input mixture is passed through the Affine + BN block and then the Nonlinearity block); and
- inputting a process result from the encoder to a sub-neural network included in each of the plurality of neural networks (see Figure 3b, where the output of the averaging block feeds the middle BLSTM blocks, which are arranged in parallel from top to bottom with an ellipsis in between).

As to claim 20, apparatus claims 17 and 21-22 and method claim 20 are related as an apparatus and the method of using the same, with each claimed element's function corresponding to the claimed method step.
Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to each function of the apparatus claims.

As to claim 18, Sawata does teach wherein each of the plurality of neural networks includes a plurality of the sub-neural networks including the sub-neural network, and the process result from the encoder is input to each of the plurality of sub-neural networks (see Figure 3b, where the output of the averaging block feeds the middle BLSTM blocks, which are arranged in parallel from top to bottom with an ellipsis in between).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sawata in view of Ochiai (WO 2022/034675 A1).

As to claim 5, Sawata teaches all of the limitations as in claim 4, above.
However, Sawata does not specifically teach wherein the feature and the size of the feature are defined by a multidimensional vector and a number of dimensions of the multidimensional vector, respectively, and the encoder reduces the number of dimensions of the multidimensional vector.

Ochiai does teach wherein the feature and the size of the feature are defined by a multidimensional vector and a number of dimensions of the multidimensional vector, respectively, and the encoder reduces the number of dimensions of the multidimensional vector (see the bottom of page 3 through the first 3 paragraphs of page 4, where the encoder is described as a NN that maps the acoustic signal to a predetermined feature space, and where the mixed acoustic signal is converted into a first feature amount with D dimensions, with h_f ∈ R^(D×1) denoting the features in the f-th frame, F the total number of frames, and D the dimension of the feature space). (The examiner notes that the conversion of the acoustic signal into the first feature amount reduces the dimensions of the original signal.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the NN architecture as taught by Sawata with the reduction of dimensions as taught by Ochiai in order to map the acoustic signal to a predetermined feature space such that it is usable downstream by the further NNs present (see Ochiai, page 3, last 2 lines, and page 4, 4th-5th full paragraphs).

Claim(s) 12, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sawata in view of Soler ("Music Source Separation Using Deep Neural Networks", 2020).

As to claim 12, Sawata teaches all of the limitations as in claim 1, above. However, Sawata does not specifically teach wherein operation circuitry multiplies the feature of the mixed sound signal by the sound source separation information output from the decoder.
Soler does teach wherein operation circuitry multiplies the feature of the mixed sound signal by the sound source separation information output from the decoder (see Fig. 3.2, where the input mix spectrogram is multiplied with the output from decoding, and see page 25, where the Skip Connection and Output Stage sections describe the output spectrogram from the decoder being multiplied with the input magnitude spectrogram). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the NN architecture as taught by Sawata with the multiplication as taught by Soler in order to learn how much each TF bin belongs to the target source (see Soler, page 25, Skip Connection, last bullet).

As to claim 13, Sawata teaches all of the limitations as in claim 12, above. Furthermore, Soler teaches wherein separated sound source signal generation circuitry generates the predetermined sound source signal based on an operation result from the operation circuitry (see Figure 3.2, where the upward arrow shows the target spectrograms generated after the multiplication).

As to claim 19, Sawata teaches all of the limitations as in claim 17, above. However, Sawata does not teach wherein operation circuitry included in each of the plurality of neural networks multiplies the feature of the mixed sound signal by the sound source separation information output from a decoder, and filter circuitry separates the predetermined sound source signal based on process results from the operation circuitry. Soler does teach wherein operation circuitry included in each of the plurality of neural networks multiplies the feature of the mixed sound signal by the sound source separation information output from a decoder (see Fig. 3.2, where the input mix spectrogram is multiplied with the output from decoding, and see page 25, where the Skip Connection and Output Stage sections describe the output spectrogram from the decoder being multiplied with the input magnitude spectrogram), and filter circuitry separates the predetermined sound source signal based on process results from the operation circuitry (see Figure 3.2, where the upward arrow shows the target spectrograms generated after the multiplication). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the NN architecture as taught by Sawata with the multiplication as taught by Soler in order to learn how much each TF bin belongs to the target source (see Soler, page 25, Skip Connection, last bullet).

Allowable Subject Matter

Claims 6-7 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. None of the prior art cited above teaches the combination of limitations recited in the independent claims together with the limitations set forth in claims 6 and 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARAS D SHAH, whose telephone number is (571) 270-1650. The examiner can normally be reached Monday-Thursday 7:30AM-2:30PM and 5PM-7PM (EST), and Friday 8AM-noon (EST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, PARAS D SHAH, can be reached at 571-270-1650.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653
03/02/2026

Prosecution Timeline

Dec 20, 2023
Application Filed
Sep 23, 2025
Non-Final Rejection — §101, §102, §103
Dec 09, 2025
Response Filed
Mar 04, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586591: SOUND SIGNAL DECODING METHOD, SOUND SIGNAL DECODER, PROGRAM, AND RECORDING MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12579367: TWO-TOWER NEURAL NETWORK FOR CONTENT-AUDIENCE RELATIONSHIP PREDICTION (2y 5m to grant; granted Mar 17, 2026)
Patent 12579360: LEARNING SUPPORT APPARATUS FOR CREATING MULTIPLE-CHOICE QUIZ (2y 5m to grant; granted Mar 17, 2026)
Patent 12562173: WEARABLE DEVICE CONTROL BASED ON VOICE COMMAND OF VERIFIED USER (2y 5m to grant; granted Feb 24, 2026)
Patent 12559026: VEHICLE AND CONTROL METHOD THEREOF (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 74%
With Interview: 99% (+31.1%)
Median Time to Grant: 3y 9m
PTA Risk: Moderate
Based on 645 resolved cases by this examiner. Grant probability derived from career allow rate.
