Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is responsive to the Application filed on 09/02/2025.
Claims 1-14 are pending in the case. Claims 1, 11, and 14 are independent claims. Claim 15 has been canceled. Claims 1-6, 8-11, and 14 are currently amended.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/02/2025 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-14 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B). If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself. Applicant is advised to consult the 2019 PEG for more details of the analysis.
Step 1 Analysis: Is the claim to a process, machine, manufacture or composition of matter? See MPEP § 2106.03.
Claims 1-10 are drawn to a method, claims 11-13 are drawn to an apparatus, and claim 14 is drawn to a non-transitory tangible computer-readable medium; therefore, each of these claim groups falls within one of the four categories of statutory subject matter (processes/methods, machines/apparatuses, manufactures, and compositions of matter; Step 1). Nonetheless, the claims are directed to a judicially recognized exception, an abstract idea, without significantly more (Step 2A, see below). Independent claims 1, 11, and 14 are non-verbatim but similar in claim construction, and hence share the same rationale that the claimed inventions are directed to non-statutory subject matter, as follows:
Regarding claim 1:
Claim 1 recites: A computer-implemented method, comprising:
processing, by an apparatus implementing a first neural network trained to process textual metadata, the textual metadata in a media file to generate a first vector of numerical values corresponding to the textual metadata;
processing, by the apparatus implementing a second neural network trained to process numerical metadata, the numerical metadata in the media file to generate a second vector of numerical values corresponding to the numerical metadata;
fusing, by the apparatus implementing a third neural network distinct from the first and second neural networks, the first vector with the second vector to produce a classification of media content in the media file;
and configuring, by the apparatus in response to producing the classification, one or more settings associated with the classification to control reproduction of the media content
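For illustration only, the recited pipeline could be sketched as follows. This is a hypothetical implementation: the claim itself recites none of these details, and simple arithmetic stand-ins are used in place of trained neural networks; all function names, thresholds, and settings below are assumptions made for the sketch.

```python
# Hypothetical sketch of the recited pipeline (illustrative only).
# Simple arithmetic stand-ins replace the claimed trained neural networks.

def first_network(textual_metadata: str) -> list[float]:
    # Stand-in for the "first neural network": encode text characters
    # as normalized numerical values.
    return [ord(c) / 255.0 for c in textual_metadata]

def second_network(numerical_metadata: list[float]) -> list[float]:
    # Stand-in for the "second neural network": scale the numeric fields
    # by their maximum value.
    m = max(numerical_metadata) or 1.0
    return [v / m for v in numerical_metadata]

def third_network(v1: list[float], v2: list[float]) -> str:
    # Stand-in for the "third neural network": fuse the two vectors into
    # a single score and map it to a classification label.
    score = sum(v1) / len(v1) + sum(v2) / len(v2)
    return "music" if score > 1.0 else "speech"

def classify_media_file(textual_metadata: str, numerical_metadata: list[float]):
    v1 = first_network(textual_metadata)
    v2 = second_network(numerical_metadata)
    label = third_network(v1, v2)
    # "Configuring one or more settings associated with the classification."
    settings = {"music": {"equalizer": "bass_boost"},
                "speech": {"equalizer": "voice"}}
    return label, settings[label]
```

As the sketch suggests, the generate-fuse-classify-configure sequence reduces to generic calculations once any particular network architecture is abstracted away.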
Step 2A Prong One Analysis: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Claim 1 is directed to an abstract idea, specifically a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C). The claim also recites a mental process, i.e., a concept that can be performed in the human mind or by a human using pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III).
Independent claim 1 recites in part:
“processing, […], the textual metadata in a media file to generate a first vector of numerical values corresponding to the textual metadata”
The limitation above is broadly and reasonably interpreted as a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C). The limitation describes mapping symbols (text) → numbers, vector construction, and numerical representation of information, all of which fall within mathematical relationships, mathematical transformations, and data encoding using numerical representation. Even if the exact algorithm is not stated, the essence of the step is applying a mathematical model or calculation to produce a numerical vector. See MPEP § 2106.04(a)(2)(I)(C).
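By way of illustration only (not drawn from the claim or specification), the kind of symbol-to-number mapping described above can be reduced to a simple fixed calculation; a real embedding model would learn such a mapping, but the arithmetic character of the step is the same:

```python
# Illustrative only: mapping text symbols to a vector of numerical values
# using a fixed rule rather than a learned embedding.

def text_to_vector(text: str, dim: int = 4) -> list[float]:
    # Bucket character codes into `dim` positions and average each bucket,
    # producing a fixed-length numerical vector for any input text.
    buckets = [0.0] * dim
    counts = [0] * dim
    for i, ch in enumerate(text):
        buckets[i % dim] += ord(ch)
        counts[i % dim] += 1
    return [b / c if c else 0.0 for b, c in zip(buckets, counts)]

vec = text_to_vector("genre=jazz")
```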
“processing, […], the numerical metadata in the media file to generate a second vector of numerical values corresponding to the numerical metadata”
The limitation above is broadly and reasonably interpreted as a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C). The limitation describes mapping metadata values → numbers, vector construction, and numerical representation of information, all of which fall within mathematical relationships, mathematical transformations, and data encoding using numerical representation. Even if the exact algorithm is not stated, the essence of the step is applying a mathematical model or calculation to produce a numerical vector. See MPEP § 2106.04(a)(2)(I)(C).
“fusing, […], the first vector with the second vector to produce a classification of media content in the media file”
The limitation above is broadly and reasonably interpreted as a mental process, i.e., a concept that can be performed in the human mind or by a human using pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). For example, one can, with pen and paper, combine the first vector with the second vector to classify the media content in the media file.
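The pen-and-paper character of this step can be illustrated with a trivial fusion rule (purely hypothetical; the claim recites no particular fusion operation, and the labels and threshold below are assumptions):

```python
# Illustrative only: "fusing" two vectors with arithmetic simple enough
# to perform by hand, then reading off a classification.

def fuse_and_classify(v1, v2, threshold=0.5):
    # Concatenate the vectors, average the result, and compare the
    # average against a threshold to pick a label.
    fused = list(v1) + list(v2)
    avg = sum(fused) / len(fused)
    return "class_A" if avg >= threshold else "class_B"
```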
“configuring, […], one or more settings associated with the classification to control reproduction of the media content”
The limitation above is broadly and reasonably interpreted as a mental process, i.e., a concept that can be performed in the human mind or by a human using pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). For example, one can, based on the classification, adjust one or more settings to control reproduction of the content.
Step 2A Prong Two Analysis: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d).
Independent claim 1 recites in part:
“A computer-implemented method, comprising:
[…] by an apparatus implementing a first neural network trained to process textual metadata, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by the apparatus implementing a second neural network trained to process numerical metadata, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by the apparatus implementing a third neural network distinct from the first and second neural networks, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by the apparatus in response to producing the classification, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “apparatus” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
Step 2B Analysis: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05.
First, additional elements directed to generally linking the use of a judicial exception to a particular technological environment or field of use are insufficient to transform the judicial exception into patent-eligible subject matter, because the claimed limitations merely link the judicial exception to a technological environment. See MPEP 2106.05(h). However, they are addressed below for the sake of completeness.
Second, additional elements amounting to mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, are insufficient to transform the judicial exception into patent-eligible subject matter, because the limitations merely apply the judicial exception using a generic computer and/or generic processes. See MPEP 2106.05(f). However, they are addressed below for the sake of completeness.
Independent claim 1 recites in part:
“A computer-implemented method, comprising:
[…] by an apparatus implementing a first neural network trained to process textual metadata, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by the apparatus implementing a second neural network trained to process numerical metadata, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by the apparatus implementing a third neural network distinct from the first and second neural networks, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by the apparatus in response to producing the classification, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “apparatus” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. The claims are not eligible subject matter.
Therefore, examining the elements recited by the limitations individually and as an ordered combination, the independent claim limitations as a whole do not recite what the courts have identified as “significantly more.”
Regarding claim 11:
Claim 11 recites: An apparatus, comprising: a memory;
and a processor coupled to the memory, wherein the processor is to:
process, by implementing a first neural network trained to process textual metadata, the textual metadata in a media file to generate a first vector of numerical values corresponding to the textual metadata;
process, by implementing a second neural network trained to process numerical metadata, the numerical metadata in the media file to generate a second vector of numerical values corresponding to the numerical metadata;
fuse, by implementing a third neural network distinct from the first and second neural networks, the first vector with the second vector to produce a classification of media content in the media file; and
configure, in response to producing the classification, one or more settings associated with the classification to control reproduction of the media content
Step 2A Prong One Analysis: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Claim 11 is directed to an abstract idea, specifically a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C). The claim also recites a mental process, i.e., a concept that can be performed in the human mind or by a human using pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III).
Independent claim 11 recites in part:
process, […] the textual metadata in a media file to generate a first vector of numerical values corresponding to the textual metadata
The limitation above is broadly and reasonably interpreted as a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C). The limitation describes mapping symbols (text) → numbers, vector construction, and numerical representation of information, all of which fall within mathematical relationships, mathematical transformations, and data encoding using numerical representation. Even if the exact algorithm is not stated, the essence of the step is applying a mathematical model or calculation to produce a numerical vector. See MPEP § 2106.04(a)(2)(I)(C).
process, […] the numerical metadata in the media file to generate a second vector of numerical values corresponding to the numerical metadata
The limitation above is broadly and reasonably interpreted as a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C). The limitation describes mapping metadata values → numbers, vector construction, and numerical representation of information, all of which fall within mathematical relationships, mathematical transformations, and data encoding using numerical representation. Even if the exact algorithm is not stated, the essence of the step is applying a mathematical model or calculation to produce a numerical vector. See MPEP § 2106.04(a)(2)(I)(C).
fuse, […] the first vector with the second vector to produce a classification of media content in the media file
The limitation above is broadly and reasonably interpreted as a mental process, i.e., a concept that can be performed in the human mind or by a human using pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). For example, one can, with pen and paper, combine the first vector with the second vector to classify the media content in the media file.
configure, in response to producing the classification, one or more settings associated with the classification to control reproduction of the media content
The limitation above is broadly and reasonably interpreted as a mental process, i.e., a concept that can be performed in the human mind or by a human using pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). For example, one can, based on the classification, adjust one or more settings to control reproduction of the content.
Step 2A Prong Two Analysis: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d).
Independent claim 11 recites in part:
“An apparatus, comprising: a memory;
and a processor coupled to the memory, wherein the processor is to” as drafted, recites computing components at a high level of generality (i.e., a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
[…] by implementing a first neural network trained to process textual metadata, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by implementing a second neural network trained to process numerical metadata, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by implementing a third neural network distinct from the first and second neural networks, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
Step 2B Analysis: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05.
First, additional elements directed to generally linking the use of a judicial exception to a particular technological environment or field of use are insufficient to transform the judicial exception into patent-eligible subject matter, because the claimed limitations merely link the judicial exception to a technological environment. See MPEP 2106.05(h). However, they are addressed below for the sake of completeness.
Second, additional elements amounting to mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, are insufficient to transform the judicial exception into patent-eligible subject matter, because the limitations merely apply the judicial exception using a generic computer and/or generic processes. See MPEP 2106.05(f). However, they are addressed below for the sake of completeness.
Independent claim 11 recites in part:
“An apparatus, comprising: a memory;
and a processor coupled to the memory, wherein the processor is to” as drafted, recites computing components at a high level of generality (i.e., a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
[…] by implementing a first neural network trained to process textual metadata, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by implementing a second neural network trained to process numerical metadata, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by implementing a third neural network distinct from the first and second neural networks, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. The claims are not eligible subject matter.
Therefore, examining the elements recited by the limitations individually and as an ordered combination, the independent claim limitations as a whole do not recite what the courts have identified as “significantly more.”
Regarding claim 14:
Claim 14 recites: A non-transitory tangible computer-readable medium storing executable code that, when executed by a processor, cause the processor to:
process, by implementing a first neural network trained to process textual metadata, the textual metadata in a media file to generate a first vector of numerical values corresponding to the textual metadata;
process, by implementing a second neural network trained to process numerical metadata, the numerical metadata in the media file to generate a second vector of numerical values corresponding to the numerical metadata;
fuse, by implementing a third neural network distinct from the first and second neural networks, the first vector with the second vector to produce a classification of media content in the media file; and
configure, in response to producing the classification, one or more settings associated with the classification to control reproduction of the media content
Step 2A Prong One Analysis: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Claim 14 is directed to an abstract idea, specifically a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C). The claim also recites a mental process, i.e., a concept that can be performed in the human mind or by a human using pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III).
Independent claim 14 recites in part:
process, […] the textual metadata in a media file to generate a first vector of numerical values corresponding to the textual metadata
The limitation above is broadly and reasonably interpreted as a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C). The limitation describes mapping symbols (text) → numbers, vector construction, and numerical representation of information, all of which fall within mathematical relationships, mathematical transformations, and data encoding using numerical representation. Even if the exact algorithm is not stated, the essence of the step is applying a mathematical model or calculation to produce a numerical vector. See MPEP § 2106.04(a)(2)(I)(C).
process, […] the numerical metadata in the media file to generate a second vector of numerical values corresponding to the numerical metadata
The limitation above is broadly and reasonably interpreted as a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C). The limitation describes mapping metadata values → numbers, vector construction, and numerical representation of information, all of which fall within mathematical relationships, mathematical transformations, and data encoding using numerical representation. Even if the exact algorithm is not stated, the essence of the step is applying a mathematical model or calculation to produce a numerical vector. See MPEP § 2106.04(a)(2)(I)(C).
fuse, […] the first vector with the second vector to produce a classification of media content in the media file
The limitation above is broadly and reasonably interpreted as a mental process, i.e., a concept that can be performed in the human mind or by a human using pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). For example, one can, with pen and paper, combine the first vector with the second vector to classify the media content in the media file.
configure, in response to producing the classification, one or more settings associated with the classification to control reproduction of the media content
The limitation above is broadly and reasonably interpreted as a mental process, i.e., a concept that can be performed in the human mind or by a human using pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). For example, one can, based on the classification, adjust one or more settings to control reproduction of the content.
Step 2A Prong Two Analysis: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d).
Independent claim 14 recites in part:
“A non-transitory tangible computer-readable medium storing executable code that, when executed by a processor, cause the processor to” as drafted, recites computing components at a high level of generality (i.e., a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
[…] by implementing a first neural network trained to process textual metadata, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by implementing a second neural network trained to process numerical metadata, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
[…] by implementing a third neural network distinct from the first and second neural networks, […] as drafted, amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; i.e., the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, this additional element is recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. The claims are not eligible subject matter.
Therefore, in examining the elements recited by the limitations individually and as an ordered combination, the independent claim limitations as a whole do not recite what the courts have identified as “significantly more.”
Furthermore, regarding the dependent claims (claims 2-10 depend on claim 1, and claims 12-13 depend on claim 11), the claims are directed to a judicial exception without significantly more, as highlighted below by evaluating the claim limitations under Steps 2A and 2B:
Claim 2 incorporates the rejection of independent claim 1 and does not integrate the judicial exception into a practical application. It involves a mental process, as a form of mental evaluation or judgment, performable in the human mind or by a human using pen and paper. See MPEP § 2106.04(a)(2)(III).
Claim 3 incorporates the rejection of independent claim 1 and does not integrate the judicial exception into a practical application.
Claim 4 incorporates the rejection of independent claim 1 and does not integrate the judicial exception into a practical application.
Claim 5 incorporates the rejection of independent claim 1 and does not integrate the judicial exception into a practical application.
Claim 6 incorporates the rejection of independent claim 1 and does not integrate the judicial exception into a practical application.
Claim 7 incorporates the rejection of claim 6 and does not integrate the judicial exception into a practical application.
Claim 8 incorporates the rejection of independent claim 1 and does not integrate the judicial exception into a practical application.
Claim 9 incorporates the rejection of independent claim 1 and does not integrate the judicial exception into a practical application.
Claim 10 incorporates the rejection of independent claim 1 and does not integrate the judicial exception into a practical application.
Claim 12 incorporates the rejection of independent claim 11 and does not integrate the judicial exception into a practical application.
Claim 13 incorporates the rejection of independent claim 11 and does not integrate the judicial exception into a practical application.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3, 6-12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over VELAGAPUDI et al. (Pub No.: 20210232911 A1), hereinafter referred to as VELAGAPUDI, in view of Zhang et al. (Pub No.: 20070255755 A1), hereinafter referred to as Zhang, and further in view of Wang et al. (US Patent No. 11,379,710 B1), hereinafter referred to as Wang.
With respect to claim 1, VELAGAPUDI discloses:
A computer-implemented method, comprising: processing, by an apparatus implementing a <Subnetwork 130a> trained to process textual metadata, the textual metadata in a media file to generate a first vector of numerical values corresponding to the textual metadata (In FIG. 1 and paragraph [0022], VELAGAPUDI discloses that the Siamese Neural Network (SNN) consists of two identical subnetworks that share the same configuration. The subnetworks (130a and 130b) operate independently on two different inputs (115a and 115b), which are numerical vectors derived from textual content. Subnetwork 130a receives numerical vectors as inputs. The textual content of the first input 115a can be converted from text to a vectorized representation using word embedding information trained using textual historical data associated with a domain for which the training data is to be assessed by the SNN. In paragraph [0024], VELAGAPUDI discloses that the subnetwork 130a outputs a first output 125a based on the first input 115a. The text content of the first input 115a may be mapped to a numerical vector using embedding information derived from text-based historical data. In paragraph [0025], VELAGAPUDI further discloses that the outputs comprise an output vector that includes numerical values representing various aspects of the output.)
processing, by the apparatus implementing a <subnetwork 130b> trained to process numerical metadata, the numerical metadata in the media file to generate a second vector of numerical values corresponding to the numerical metadata (In paragraph [0022], VELAGAPUDI discloses that the <subnetwork 130b> receives numerical vectors as inputs. The textual content of the second input 115b can be converted from text to a vectorized representation using word embedding information trained using textual historical data associated with a domain for which the training data is to be assessed by the SNN. In paragraph [0024], VELAGAPUDI discloses that the text content of the second input 115b may be mapped to a numerical vector using embedding information derived from text-based historical data. In paragraph [0025], VELAGAPUDI discloses that the outputs may comprise an output vector that includes numerical values representing various aspects of the output.)
and configuring, by the apparatus in response to producing the classification, one or more settings associated with the classification to control reproduction of the media content (In paragraph [0059], VELAGAPUDI discloses a training data modification unit, which permits the user to save changes to the training data that the user has edited via the user interface provided by the user interface unit. The training data modification unit receives edits to the training data via the user interface and provides those edits to the training data modification unit. The training data modification unit may perform automatic data cleanup based on the output of the data analysis unit to remove outliers from the training data.)
With respect to claim 1, VELAGAPUDI does not explicitly disclose:
fusing, by the apparatus implementing a <fusion model> distinct from the first and second neural networks, the first vector with the second vector to produce a classification of media content in the media file
A < Subnetwork 130a > as being claimed “first neural network”
A < Subnetwork 130b > as being claimed “second neural network”
A < fusion model > as being claimed “third neural network”
However, Zhang is known to disclose:
fusing, by the apparatus implementing a <fusion model> distinct from the first and second neural networks, the first vector with the second vector to produce a classification of media content in the media file (In paragraph [0047], Zhang discloses a fusion model 175 that uses the results of the first classification model and the second classification model for determining whether the video belongs to the category.)
VELAGAPUDI and Zhang are analogous art because both references concern classifying text to one class when the data element may be more closely related to data elements of another class. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify VELAGAPUDI, with each subnetwork (130a and 130b) receiving numerical vectors as inputs as taught by VELAGAPUDI, with fusing the results of the models to determine whether the data falls into a category as taught by Zhang. The motivation for doing so would have been to determine whether the data element may have been improperly labeled and may be more closely associated with another class (See [0044] of VELAGAPUDI).
With respect to claim 1, VELAGAPUDI, in view of Zhang, does not explicitly disclose:
A < Subnetwork 130a > as being claimed “first neural network”
A < Subnetwork 130b > as being claimed “second neural network”
A < fusion model > as being claimed “third neural network”
However, Wang is known to disclose:
A < Subnetwork 130a > as being claimed “first neural network” (In Col. 5, lines 30–37, Wang discloses the first neural network used to process data contained in the user profile dataset. )
A < Subnetwork 130b > as being claimed “second neural network” (In Col. 5, lines 38-45, Wang discloses a second neural network used to process data contained in the model profile dataset.)
A < fusion model > as being claimed “third neural network” (In Col. 5, lines 64–67, Wang discloses the first training output data and second training output data are used to train the third neural network.)
VELAGAPUDI, in view of Zhang, and Wang are analogous art because both references concern classifying text to one class when the data element may be more closely related to data elements of another class. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify VELAGAPUDI, in view of Zhang, with a third neural network trained based on first training output data and second training output data as taught by Wang. The motivation for doing so would have been to improve performance for language modeling tasks when sparse data causes overfitting (See [0041] of Zhang).
Regarding claim 3, VELAGAPUDI, in view of Zhang and Wang, discloses the elements of claim 1. In addition, VELAGAPUDI discloses:
The method of claim 1, wherein the first vector comprises first probability values corresponding to classes of the media content (In paragraph [0031], VELAGAPUDI discloses the first input 115a, which is a vector. Historical data 215 may include textual content, such as text messages, emails, transcripts of meetings, phone calls, or other discussions, and/or other such text-based input.)
Regarding claim 6, VELAGAPUDI, in view of Zhang and Wang, discloses the elements of claim 1. In addition, Wang discloses:
The method of claim 1, wherein determining the classification of the media content comprises inputting first probability values and second probability values to the third neural network (In Col. 5, lines 64–67, Wang discloses the first training output data and second training output data are used to train the third neural network.)
Regarding claim 7, VELAGAPUDI, in view of Zhang and Wang, discloses the elements of claim 6. In addition, Zhang discloses:
The method of claim 6, wherein the first probability values and the second probability values each include values corresponding to a movie class, a music class, a voice class, an advertisement class, a news class, and a sports class (In paragraph [0013], Zhang discloses a first classification model to determine whether a video clip belongs to a category.)
Regarding claim 8, VELAGAPUDI, in view of Zhang and Wang, discloses the elements of claim 1. In addition, VELAGAPUDI discloses:
The method of claim 1, wherein: the third neural network produces third probability values, and determining the classification comprises selecting a class corresponding to a greatest value of the third probability values (In paragraph [0054], VELAGAPUDI discloses that the data elements may be ranked by score in column 840 from greatest to smallest, where a higher score indicates that the data element may be an outlier. The report can rank the data elements with the highest score first, so that these elements appear earlier in the report presented to the user.)
Regarding claim 9, VELAGAPUDI, in view of Zhang and Wang, discloses the elements of claim 1. In addition, VELAGAPUDI discloses:
The method of claim 1, wherein: the first neural network is trained based on training text associated with a set of training media and labels corresponding to the set of training media, and the second neural network is trained based on training numerical metadata associated with the set of training media and the labels corresponding to the set of training media (In FIG. 1 and paragraph [0022], VELAGAPUDI discloses that the Siamese Neural Network (SNN) consists of two identical subnetworks that share the same configuration. The subnetworks (130a and 130b) operate independently on two different inputs (115a and 115b), which are numerical vectors derived from textual content. Subnetwork 130a receives numerical vectors as inputs. The textual content of the first input 115a can be converted from text to a vectorized representation using word embedding information trained using textual historical data associated with a domain for which the training data is to be assessed by the SNN.)
Regarding claim 10, VELAGAPUDI, in view of Zhang and Wang, discloses the elements of claim 1. In addition, Wang discloses:
The method of claim 1, wherein the third neural network is trained with first training probability values, second training probability values, and labels corresponding to a set of training media (In Col. 5, lines 64–67, Wang discloses that the first training output data and second training output data are used to train the third neural network.)
With respect to claim 11, VELAGAPUDI discloses:
An apparatus, comprising: a memory; and a processor coupled to the memory, wherein the processor is to: process, by implementing a <Subnetwork 130a> trained to process textual metadata, the textual metadata in a media file to generate a first vector of numerical values corresponding to the textual metadata (In FIG. 1 and paragraph [0022], VELAGAPUDI discloses that the Siamese Neural Network (SNN) consists of two identical subnetworks that share the same configuration. The subnetworks (130a and 130b) operate independently on two different inputs (115a and 115b), which are numerical vectors derived from textual content. Subnetwork 130a receives numerical vectors as inputs. The textual content of the first input 115a can be converted from text to a vectorized representation using word embedding information trained using textual historical data associated with a domain for which the training data is to be assessed by the SNN. In paragraph [0024], VELAGAPUDI discloses that the subnetwork 130a outputs a first output 125a based on the first input 115a. The text content of the first input 115a may be mapped to a numerical vector using embedding information derived from text-based historical data. In paragraph [0025], VELAGAPUDI further discloses that the outputs comprise an output vector that includes numerical values representing various aspects of the output. In paragraph [0081], VELAGAPUDI discloses a main memory and a processor.)
process, by implementing a <subnetwork 130b> trained to process numerical metadata, the numerical metadata in the media file to generate a second vector of numerical values corresponding to the numerical metadata (In paragraph [0022], VELAGAPUDI discloses that the <subnetwork 130b> receives numerical vectors as inputs. The textual content of the second input 115b can be converted from text to a vectorized representation using word embedding information trained using textual historical data associated with a domain for which the training data is to be assessed by the SNN. In paragraph [0024], VELAGAPUDI discloses that the text content of the second input 115b may be mapped to a numerical vector using embedding information derived from text-based historical data. In paragraph [0025], VELAGAPUDI discloses that the outputs may comprise an output vector that includes numerical values representing various aspects of the output.)
and configure, in response to producing the classification, one or more settings associated with the classification to control reproduction of the media content (In paragraph [0059], VELAGAPUDI discloses a training data modification unit, which permits the user to save changes to the training data that the user has edited via the user interface provided by the user interface unit. The training data modification unit receives edits to the training data via the user interface and provides those edits to the training data modification unit. The training data modification unit may perform automatic data cleanup based on the output of the data analysis unit to remove outliers from the training data.)
With respect to claim 11, VELAGAPUDI does not explicitly disclose:
fuse, by implementing a <fusion model> distinct from the first and second neural networks, the first vector with the second vector to produce a classification of media content in the media file
A < Subnetwork 130a > as being claimed “first neural network”
A < Subnetwork 130b > as being claimed “second neural network”
A < fusion model > as being claimed “third neural network”
However, Zhang is known to disclose:
fuse, by implementing a <fusion model> distinct from the first and second neural networks, the first vector with the second vector to produce a classification of media content in the media file (In paragraph [0047], Zhang discloses a fusion model 175 that uses the results of the first classification model and the second classification model for determining whether the video belongs to the category.)
VELAGAPUDI and Zhang are analogous art because both references concern classifying text to one class when the data element may be more closely related to data elements of another class. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify VELAGAPUDI, with each subnetwork (130a and 130b) receiving numerical vectors as inputs as taught by VELAGAPUDI, with fusing the results of the models to determine whether the data falls into a category as taught by Zhang. The motivation for doing so would have been to determine whether the data element may have been improperly labeled and may be more closely associated with another class (See [0044] of VELAGAPUDI).
With respect to claim 11, VELAGAPUDI, in view of Zhang, does not explicitly disclose:
A < Subnetwork 130a > as being claimed “first neural network”
A < Subnetwork 130b > as being claimed “second neural network”
A < fusion model > as being claimed “third neural network”
However, Wang is known to disclose:
A < Subnetwork 130a > as being claimed “first neural network” (In Col. 5, lines 30–37, Wang discloses the first neural network used to process data contained in the user profile dataset. )
A < Subnetwork 130b > as being claimed “second neural network” (In Col. 5, lines 38-45, Wang discloses a second neural network used to process data contained in the model profile dataset.)
A < fusion model > as being claimed “third neural network” (In Col. 5, lines 64–67, Wang discloses the first training output data and second training output data are used to train the third neural network.)
VELAGAPUDI, in view of Zhang, and Wang are analogous art because both references concern classifying text to one class when the data element may be more closely related to data elements of another class. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify VELAGAPUDI, in view of Zhang, with a third neural network trained based on first training output data and second training output data as taught by Wang. The motivation for doing so would have been to improve performance for language modeling tasks when sparse data causes overfitting (See [0041] of Zhang).
Regarding claim 12, VELAGAPUDI, in view of Zhang and Wang, discloses the elements of claim 11. In addition, VELAGAPUDI discloses:
The apparatus of claim 11, wherein the numerical metadata includes data indicating duration, sample rate, video presence, bit depth, and number of channels of the media content (In paragraph [0051], VELAGAPUDI discloses that the graph includes the label ID of the class with which the data element is currently associated.)
With respect to claim 14, VELAGAPUDI discloses:
A non-transitory tangible computer-readable medium storing executable code that, when executed by a processor, cause the processor to: process, by implementing a <Subnetwork 130a> trained to process textual metadata, the textual metadata in a media file to generate a first vector of numerical values corresponding to the textual metadata (In FIG. 1 and paragraph [0022], VELAGAPUDI discloses that the Siamese Neural Network (SNN) consists of two identical subnetworks that share the same configuration. The subnetworks (130a and 130b) operate independently on two different inputs (115a and 115b), which are numerical vectors derived from textual content. Subnetwork 130a receives numerical vectors as inputs. The textual content of the first input 115a can be converted from text to a vectorized representation using word embedding information trained using textual historical data associated with a domain for which the training data is to be assessed by the SNN. In paragraph [0024], VELAGAPUDI discloses that the subnetwork 130a outputs a first output 125a based on the first input 115a. The text content of the first input 115a may be mapped to a numerical vector using embedding information derived from text-based historical data. In paragraph [0025], VELAGAPUDI further discloses that the outputs comprise an output vector that includes numerical values representing various aspects of the output. In paragraph [0081], VELAGAPUDI discloses a main memory and a processor.)
process, by implementing a <subnetwork 130b> trained to process numerical metadata, the numerical metadata in the media file to generate a second vector of numerical values corresponding to the numerical metadata (In paragraph [0022], VELAGAPUDI discloses that the <subnetwork 130b> receives numerical vectors as inputs. The textual content of the second input 115b can be converted from text to a vectorized representation using word embedding information trained using textual historical data associated with a domain for which the training data is to be assessed by the SNN. In paragraph [0024], VELAGAPUDI discloses that the text content of the second input 115b may be mapped to a numerical vector using embedding information derived from text-based historical data. In paragraph [0025], VELAGAPUDI discloses that the outputs may comprise an output vector that includes numerical values representing various aspects of the output.)
and configure, in response to producing the classification, one or more settings associated with the classification to control reproduction of the media content (In paragraph [0059], VELAGAPUDI discloses a training data modification unit, which permits the user to save changes to the training data that the user has edited via the user interface provided by the user interface unit. The training data modification unit receives edits to the training data via the user interface and provides those edits to the training data modification unit. The training data modification unit may perform automatic data cleanup based on the output of the data analysis unit to remove outliers from the training data.)
With respect to claim 14, VELAGAPUDI does not explicitly disclose:
fuse, by implementing a <fusion model> distinct from the first and second neural networks, the first vector with the second vector to produce a classification of media content in the media file
A < Subnetwork 130a > as being claimed “first neural network”
A < Subnetwork 130b > as being claimed “second neural network”
A < fusion model > as being claimed “third neural network”
However, Zhang is known to disclose:
fuse, by implementing a <fusion model> distinct from the first and second neural networks, the first vector with the second vector to produce a classification of media content in the media file (In paragraph [0047], Zhang discloses a fusion model 175 that uses the results of the first classification model and the second classification model for determining whether the video belongs to the category.)
VELAGAPUDI and Zhang are analogous art because both references concern classifying text to one class when the data element may be more closely related to data elements of another class. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify VELAGAPUDI, with each subnetwork (130a and 130b) receiving numerical vectors as inputs as taught by VELAGAPUDI, with fusing the results of the models to determine whether the data falls into a category as taught by Zhang. The motivation for doing so would have been to determine whether the data element may have been improperly labeled and may be more closely associated with another class (See [0044] of VELAGAPUDI).
With respect to claim 14, VELAGAPUDI, in view of Zhang, does not explicitly disclose:
A < Subnetwork 130a > as being claimed “first neural network”
A < Subnetwork 130b > as being claimed “second neural network”
A < fusion model > as being claimed “third neural network”
However, Wang is known to disclose:
A < Subnetwork 130a > as being claimed “first neural network” (In Col. 5, lines 30–37, Wang discloses the first neural network used to process data contained in the user profile dataset. )
A < Subnetwork 130b > as being claimed “second neural network” (In Col. 5, lines 38-45, Wang discloses a second neural network used to process data contained in the model profile dataset.)
A < fusion model > as being claimed “third neural network” (In Col. 5, lines 64–67, Wang discloses the first training output data and second training output data are used to train the third neural network.)
VELAGAPUDI, in view of Zhang, and Wang are analogous art because both references concern classifying text to one class when the data element may be more closely related to data elements of another class. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify VELAGAPUDI, in view of Zhang, with a third neural network trained based on first training output data and second training output data as taught by Wang. The motivation for doing so would have been to improve performance for language modeling tasks when sparse data causes overfitting (See [0041] of Zhang).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over VELAGAPUDI, in view of Zhang and Wang, and further in view of Modarresi et al. (Pub No.: 20190164083 A1), hereinafter referred to as Modarresi.
Regarding claim 2, VELAGAPUDI, in view of Zhang and Wang, discloses the elements of claim 1. VELAGAPUDI, in view of Zhang and Wang, does not explicitly disclose:
The method of claim 1, wherein analyzing the textual metadata comprises: removing punctuation and words from the textual metadata
mapping the textual metadata to a vector of numerical values
and inputting the vector of numerical values to the first neural network to produce the first vector
However, Modarresi discloses the limitations:
The method of claim 1, wherein analyzing the textual metadata comprises: removing punctuation and words from the textual metadata (In paragraph [0046], Modarresi discloses removing punctuation and stop words, such as “the,” from categorical data.)
mapping the textual metadata to a vector of numerical values (In paragraph [0050], Modarresi discloses that each string taken from the parsed categorical data (or directly from the categorical data 108 itself) is used to generate vector representations of the numerical data as a sparse vector of count values.)
and inputting the vector of numerical values to the first neural network to produce the first vector (In paragraph [0053], Modarresi discloses converting categorical data 108 into numerical data 124 using natural language processing and then clustering the numerical data. The clusters 128 form a number of latent classes that is smaller than the number of classes in the categorical data.)
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of VELAGAPUDI, in view of Zhang and Wang, to include Modarresi, with clustering numerical data into a number of clusters that is smaller than the number of classes in the categorical data as taught by Modarresi. The motivation for doing so would have been to improve operation of a computing device to support efficient and accurate use of categorical data (See [0003] of Modarresi).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over VELAGAPUDI, in view of Zhang and Wang, and further in view of Shen et al. (Pub No.: 20200218857 A1), hereinafter referred to as Shen.
Regarding claim 4, VELAGAPUDI, in view of Zhang and Wang, discloses the elements of claim 1. VELAGAPUDI, in view of Zhang and Wang, does not explicitly disclose:
The method of claim 1, wherein analyzing the numerical metadata comprises: extracting the numerical metadata
and inputting the numerical metadata to the second neural network to produce the second vector result
However, Shen discloses the limitations:
The method of claim 1, wherein analyzing the numerical metadata comprises: extracting the numerical metadata (In paragraph [0024], Shen discloses extracting the numbers detected and words surrounding the numbers.)
and inputting the numerical metadata to the second neural network to produce the second vector result (In paragraph [0029], Shen discloses inputting the numbers and the associated most correlated feature into a second machine learning module.)
Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of VELAGAPUDI, in view of Zhang and Wang, to include Shen, with semantic classification of numerical data in its natural language context as taught by Shen. The motivation for doing so would have been to minimize the cost function and improve the classification accuracy (See Col. 7, lines 12-13 of Shen).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over VELAGAPUDI, in view of Zhang and Wang, and further in view of Ramsl et al. (Pub No.: 20190244094 A1), hereinafter referred to as Ramsl.
Regarding claim 5, VELAGAPUDI, in view of Zhang and Wang, discloses the elements of claim 1. VELAGAPUDI, in view of Zhang and Wang, does not explicitly disclose:
The method of claim 1, wherein the first vector comprises first probability values and the second vector comprises second probability values corresponding to a same set of classes
However, Ramsl discloses this limitation (in paragraph [0025], Ramsl discloses that a first item vector may indicate that there is a high probability that the first item is associated with the words “blue,” “toy,” “electronic,” and “handheld.” A second item vector may also indicate that there is a high probability that the second item is associated with some or all of the same words as the first item.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of VELAGAPUDI in view of Zhang and Wang, to include Ramsl's converting, by the neural network, of the first and second textual data to a first vector and a second vector, as taught by Ramsl. The motivation for doing so would have been to improve the recommendations as more data is received and as users confirm or modify the recommendations (see [0018] of Ramsl).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over VELAGAPUDI in view of Zhang and Wang, and further in view of ALDA et al. (Pub No.: 20200401877 A1), hereinafter referred to as ALDA.
Regarding claim 13, VELAGAPUDI in view of Zhang and Wang discloses the elements of claim 11 but does not explicitly disclose:
The apparatus of claim 11, wherein the processor is to select an audio setting based on the classification
However, ALDA discloses this limitation (in paragraph [0018], ALDA discloses that the values may be pre-programmed and may form a set of possible values that may be selected from and assigned to the category 106 attribute. For example, the values for category 106 may be “Computers,” “Cameras,” “Televisions,” “Telephones,” or other values. A value may be selected and assigned to category 106 for a particular record.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of VELAGAPUDI in view of Zhang and Wang, to include ALDA's teaching that first and second explained variable values may then be associated with the data record, as taught by ALDA. The motivation for doing so would have been to increase flexibility and efficiency relative to other data categorization techniques (see [0014] of ALDA).
Response to Arguments
Applicant's arguments filed 09/02/2025 have been fully considered but are not persuasive.
Pertaining to the rejection under 35 U.S.C. § 101
The examiner respectfully remains convinced that the currently amended claims 1, 11 and 14 (the independent claims) are directed to a judicial exception (abstract idea) without significantly more. Under the analysis in Step 2A, Prong 1, claim 1 recites in part: “processing, […], the textual metadata in a media file to generate a first vector of numerical values corresponding to the textual metadata,” which is interpreted as a mathematical concept; a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number. See MPEP § 2106.04(a)(2)(I)(C). Claim 1 also recites: “fusing, […], the first vector with the second vector to produce a classification of media content in the media file.” This limitation is broadly and reasonably interpreted as a mental process, i.e., a concept performed in the human mind or by a human using pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). Combining a first list and a second list of numbers that numerically represent data can be performed using pen and paper.
Under the analysis in Step 2A, Prong 2, claim 1 recites in part: “[…] by implementing a first neural network trained to process textual metadata, […].” As drafted, this amounts to adding the words “apply it” (or an equivalent) to the judicial exception and reciting only the idea of a solution or outcome; that is, the claim fails to recite details of how a solution to a problem is accomplished, because it is unclear how the “neural network” is used, nor does the specification make clear how these actions are performed. Thus, these additional elements are recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d). Therefore, the claim is directed to an abstract mental process.
Furthermore, the applicant argues that the claims are directed to “improve media presentation by avoiding loss of artistic intent, further technology improvement….” It is the examiner's opinion that nowhere do the claims show the improvement to media presentation by identifying a problem and explaining how the invention offers a new way to solve it, or by explaining how it is better than older versions or methods. The limitations should give enough detail that someone skilled in the field can see that the invention actually provides the asserted improvement, rather than relying solely on the specification. Therefore, the claims recite the abstract idea.
The arguments are not persuasive, and a full § 101 analysis is set forth above.
Pertaining to the rejection under 35 U.S.C. § 103
Applicant’s arguments in regard to the examiner’s rejections under 35 USC 103 are moot in view of the new grounds of rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVEL HONORE, whose telephone number is (703) 756-1179. The examiner can normally be reached Monday-Friday, 8 a.m.-5:30 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela D. Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
EVEL HONORE
Examiner
Art Unit 2142
/Mariela Reyes/Supervisory Patent Examiner, Art Unit 2142