DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 04/07/2023, 12/05/2023, and 02/04/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119 (a)-(d). The certified copy has been filed in parent Application No. KR10-2021-0133895, filed on 10/08/2021.
Claim Objections
Claim 7 is objected to because of the following informalities: Claim 7 recites “identify the first attribute value and the third attribute value among the plurality of attribute values”. The claim should recite “identify the first attribute value and a third attribute value among the plurality of attribute values”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “the obtained prediction information” in the limitation “train the second neural network model based on the obtained prediction information”. It is unclear whether “the obtained prediction information” refers to the “prediction information” in “a processor configured to obtain prediction information on the content by inputting at least one attribute for content to a first neural network model,” or to the prediction information in “obtain prediction information for the content by inputting one or more attribute values other than the first attribute value among the plurality of values and the second attribute value to the first neural network model”. For examination purposes, Examiner interprets it as the latter. Claim 10 also recites similar limitations and is rejected on the same grounds. Claims 2-7 and 11-15 are dependent claims that do not cure the deficiencies and are rejected for the same reasons.
Claim 1 recites the limitation “a processor configured to obtain prediction information on the content by inputting at least one attribute for content to a first neural network model”. It is unclear whether the “at least one attribute for content” refers to “the content” or to a different content. For examination purposes, Examiner interprets the limitation as “a processor configured to obtain prediction information on the content by inputting at least one attribute for the content to a first neural network model”. Claims 2-7 are dependent claims that do not cure the deficiencies and are rejected for the same reasons.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites:
Step 2A, Prong 1
“based on obtaining a plurality of attribute values for the content, identify a first attribute value among the plurality of attribute values” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can identify a first attribute among a plurality of attributes. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
“obtain a second attribute value corresponding to the first attribute value by inputting, to a second neural network model, at least one relevant attribute value related to the first attribute value among the plurality of attribute values” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can determine a second attribute based on a relevant attribute in a plurality of attributes. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
“based on similarity between the first attribute value and the second attribute value being greater than or equal to a first threshold value, obtain prediction information for the content by inputting one or more attribute values other than the first attribute value among the plurality of values and the second attribute value to the first neural network model” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can determine that the similarity between two attributes is greater than a threshold to obtain prediction information about a content. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
Step 2A, Prong 2
“An electronic device comprising: a memory; and a processor configured to obtain prediction information on the content by inputting at least one attribute for content to a first neural network model, wherein the processor is further configured to” (Using an electronic device, memory, processor, and neural network to make a prediction is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“obtain a second attribute value corresponding to the first attribute value by inputting, to a second neural network model, at least one relevant attribute value related to the first attribute value among the plurality of attribute values” (Using a neural network to determine a second attribute is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“based on similarity between the first attribute value and the second attribute value being greater than or equal to a first threshold value, obtain prediction information for the content by inputting one or more attribute values other than the first attribute value among the plurality of values and the second attribute value to the first neural network model” (Using a neural network to obtain prediction information is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“train the second neural network model based on the obtained prediction information” (mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
This judicial exception is not integrated into a practical application.
Step 2B
“An electronic device comprising: a memory; and a processor configured to obtain prediction information on the content by inputting at least one attribute for content to a first neural network model, wherein the processor is further configured to” (mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“obtain a second attribute value corresponding to the first attribute value by inputting, to a second neural network model, at least one relevant attribute value related to the first attribute value among the plurality of attribute values” (Using a neural network to determine a second attribute is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“based on similarity between the first attribute value and the second attribute value being greater than or equal to a first threshold value, obtain prediction information for the content by inputting one or more attribute values other than the first attribute value among the plurality of values and the second attribute value to the first neural network model” (Using a neural network to obtain prediction information is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“train the second neural network model based on the obtained prediction information” (mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 2 recites:
Step 2A, Prong 1
“obtain a loss value comprising the prediction information obtained through the first neural network model and label information corresponding to the plurality of attribute values” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can determine a loss in a prediction based on a label corresponding to attributes. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
Step 2A, Prong 2
“obtain a loss value comprising the prediction information obtained through the first neural network model and label information corresponding to the plurality of attribute values” (Determining a loss based on a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“train the second neural network model based on the loss value” (Training a neural network model is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“wherein training of the first neural network model is stopped while the second neural network model is being trained” (linking judicial exception to a field of use. See MPEP 2106.05(h).)
This judicial exception is not integrated into a practical application.
Step 2B
“obtain a loss value comprising the prediction information obtained through the first neural network model and label information corresponding to the plurality of attribute values” (Determining a loss based on a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“train the second neural network model based on the loss value” (Training a neural network model is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“wherein training of the first neural network model is stopped while the second neural network model is being trained” (linking judicial exception to a field of use. See MPEP 2106.05(h).)
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 3 recites:
Step 2A, Prong 1
“based on similarity between the first attribute value and the second attribute value being less than the first threshold value, obtain prediction information for the content by inputting the plurality of attribute values including the first attribute value to the first neural network model” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can make a prediction about a plurality of attributes including a first attribute based on the similarity between the first and second attribute being under a threshold. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
Step 2A, Prong 2
“based on similarity between the first attribute value and the second attribute value being less than the first threshold value, obtain prediction information for the content by inputting the plurality of attribute values including the first attribute value to the first neural network model” (Making a prediction using a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
This judicial exception is not integrated into a practical application.
Step 2B
“based on similarity between the first attribute value and the second attribute value being less than the first threshold value, obtain prediction information for the content by inputting the plurality of attribute values including the first attribute value to the first neural network model” (Making a prediction using a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 4 recites:
Step 2A, Prong 1
“based on at least one of information about a correlation between the plurality of attribute values and information about distribution of each of the plurality of attribute values, identify the at least one relevant attribute value related to the first attribute value among the plurality of attribute values” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can identify a relevant feature based on the similarity and distribution of attributes in their mind. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
Step 2A, Prong 2 & 2B
The claim does not recite any additional elements.
Claim 5 recites:
Step 2A, Prong 1
“obtain first test values corresponding to the first attribute value by inputting, to the second neural network model, each of remaining attribute values other than the first attribute value among the plurality of attribute values,” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can determine a value based on remaining attributes that do not include a first attribute. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
“identify first test values having similarity that is greater than a specified second threshold value among the first test values,” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can identify test values that have a similarity greater than a threshold. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
“identify attribute values corresponding to first test values having similarity with the first attribute value greater than or equal to a specified second threshold value as candidate attribute values to identify the relevant attribute value,” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can identify attributes that have a similarity equal to or greater than a threshold in their mind. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
“and identify the at least one relevant attribute value related to the first attribute value based on the identified candidate attribute values.” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can identify a relevant attribute related to a first attribute and an identified attribute in their mind. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
Step 2A, Prong 2
“obtain first test values corresponding to the first attribute value by inputting, to the second neural network model, each of remaining attribute values other than the first attribute value among the plurality of attribute values,” (Determining test values using a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
This judicial exception is not integrated into a practical application.
Step 2B
“obtain first test values corresponding to the first attribute value by inputting, to the second neural network model, each of remaining attribute values other than the first attribute value among the plurality of attribute values,” (Determining test values using a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 6 recites:
Step 2A, Prong 1
“obtain second test values corresponding to the first attribute value by inputting, to the second neural network model, each of the combinations of the identified candidate attribute values” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can determine test values corresponding to a first attribute based on a combination of candidate attributes in their mind. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
“identify second test values having similarity with the first attribute value greater than a specified third threshold value among the second test values,” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can identify test values that are similar to a first attribute greater than a threshold in their mind. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
“identify combinations corresponding to second test values having similarity with the first attribute value greater than or equal to a specified third threshold value as candidate attribute values to identify the relevant attribute value,” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can identify combinations of values that are similar to a first attribute greater than a threshold in their mind. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
“and identify attribute values included in a combination in which similarity between the second test values and the first attribute value is the highest among the identified candidate combinations as the at least one relevant attribute value related to the first attribute value” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can identify which attribute has the highest similarity. (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.).
Step 2A, Prong 2
“obtain second test values corresponding to the first attribute value by inputting, to the second neural network model, each of the combinations of the identified candidate attribute values” (Determining test values using a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
This judicial exception is not integrated into a practical application.
Step 2B
“obtain second test values corresponding to the first attribute value by inputting, to the second neural network model, each of the combinations of the identified candidate attribute values” (Determining test values using a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 7 recites:
Step 2A, Prong 1
“based on obtaining the plurality of attribute values for the content, identify the first attribute value and the third attribute value among the plurality of attribute values” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can identify a first and third attribute among a plurality of attributes (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.)
“obtain a fourth attribute value corresponding to the third attribute value by inputting at least one relevant attribute value related to the third attribute value, among the plurality of attribute values, to the second neural network model” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can determine a fourth attribute related to a third attribute based on a relevant attribute correlated with a third attribute (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.)
“based on similarity between the third attribute value and the fourth attribute value being greater than or equal to a specified fourth threshold value, obtain prediction information for the content by inputting one or more attribute values other than the first attribute value and the third attribute value among the plurality of attribute values, the second attribute value, and the fourth attribute value to the first neural network model” (This step is a recitation of a mental process that is practical to perform in the human mind. A human can make a prediction based on a similarity between a third and fourth attribute being greater than a threshold using a second and fourth attribute but not a first and third attribute (i.e., observation, evaluation, judgement, opinion). See MPEP § 2106.04(a)(2), subsection III.)
Step 2A, Prong 2
“obtain a fourth attribute value corresponding to the third attribute value by inputting at least one relevant attribute value related to the third attribute value, among the plurality of attribute values, to the second neural network model” (Determining a fourth attribute using a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“based on similarity between the third attribute value and the fourth attribute value being greater than or equal to a specified fourth threshold value, obtain prediction information for the content by inputting one or more attribute values other than the first attribute value and the third attribute value among the plurality of attribute values, the second attribute value, and the fourth attribute value to the first neural network model” (Making a prediction using a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
This judicial exception is not integrated into a practical application.
Step 2B
“obtain a fourth attribute value corresponding to the third attribute value by inputting at least one relevant attribute value related to the third attribute value, among the plurality of attribute values, to the second neural network model” (Determining a fourth attribute using a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“based on similarity between the third attribute value and the fourth attribute value being greater than or equal to a specified fourth threshold value, obtain prediction information for the content by inputting one or more attribute values other than the first attribute value and the third attribute value among the plurality of attribute values, the second attribute value, and the fourth attribute value to the first neural network model” (Making a prediction using a neural network is mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 8 recites:
Step 2A, Prong 1
Claim 8 recites at least the abstract idea identified above in claim 1.
Step 2A, Prong 2
“an inputter comprising input circuitry; and an outputter comprising output circuitry;” (Generic computer components. See 2106.05(f).)
“wherein the processor is further configured to: control the outputter to provide the obtained prediction information,” (mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“receive feedback for the prediction information through the inputter” (insignificant extra-solution activity. See MPEP 2106.05(g).)
“train the second neural network model based on the received feedback” (mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
This judicial exception is not integrated into a practical application.
Step 2B
“an inputter comprising input circuitry; and an outputter comprising output circuitry;” (Generic computer components. See 2106.05(f).)
“wherein the processor is further configured to: control the outputter to provide the obtained prediction information,” (mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
“receive feedback for the prediction information through the inputter” (This step appears to be directed to transmitting or receiving information, which is well-understood, routine, and conventional, e.g., receiving or transmitting data over a network, such as using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); See MPEP 2106.05(d)(II).)
“train the second neural network model based on the received feedback” (mere instructions to apply the exception using a generic computer component. See 2106.05(f).)
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 9 recites:
Step 2A, Prong 1
Claim 9 recites at least the abstract idea identified above in claim 1.
Step 2A, Prong 2
“wherein the first attribute value includes an attribute value not included in learning data for learning of the first neural network or an attribute value having frequency included in the learning data less than a specified fifth threshold value” (linking judicial exception to a field of use. See MPEP 2106.05(h).)
This judicial exception is not integrated into a practical application.
Step 2B
“wherein the first attribute value includes an attribute value not included in learning data for learning of the first neural network or an attribute value having frequency included in the learning data less than a specified fifth threshold value” (linking judicial exception to a field of use. See MPEP 2106.05(h).)
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim 10 recites:
Step 2A, Prong 1
See rejection of claim 1. Same rationale applies.
Claim 11 recites:
Step 2A, Prong 1
See rejection of claim 2. Same rationale applies.
Claim 12 recites:
Step 2A, Prong 1
See rejection of claim 3. Same rationale applies.
Claim 13 recites:
Step 2A, Prong 1
See rejection of claim 4. Same rationale applies.
Claim 14 recites:
Step 2A, Prong 1
See rejection of claim 5. Same rationale applies.
Claim 15 recites:
Step 2A, Prong 1
See rejection of claim 6. Same rationale applies.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4, 7-8, 10, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (US-20190220675-A1) in view of Kowalski et al. (US-20210335029-A1).
Regarding Claim 1,
Guo (US 20190220675 A1) teaches an electronic device comprising:
a memory; and
a processor configured to obtain prediction information (figure 4A; para [0080] “Referring back to FIG. 4A, in block 414, the discriminator 258 may determine whether the first object in the first image and the second object in the second image describe the same object based on the similarity score.” determining whether the first image and second image refer to the same object is prediction information.) on the content by inputting at least one attribute for content (para [0064] “In block 504a, the compact representation generator 254a may map the first initial representation 420a of the first object in the first image to the first compact representation 422a of the first object in the first image. In particular, the compact representation generator 254a may map the first initial feature vector 420a of the first object to the first compact feature vector 422a of the first object.” Initial feature vector includes at least one attribute.) to a first neural network model (figure 4A; para [0065] “As a result, the first initial feature vector 420a may be transformed into the first compact feature vector 422a of the first object in the exact same way as the n.sup.th initial feature vector 420n being transformed to the n.sup.th compact feature vector 422n of the second object. In some embodiments, the compact representation generators 254a . . . 254n and the similarity scorer 256 may implement the model in the form of a neural network including n subnetworks. The compact representation generator 254a may implement a first subnetwork of the neural network. The compact representation generator 254n may implement an n.sup.th subnetwork of the neural network. The n subnetworks of the neural network may be identical to each other.” The compact representation generator is a neural network. figure 4A shows two compact representation generators.), wherein the processor is further configured to:
based on obtaining a plurality of attribute values for the content, identify a first attribute value among the plurality of attribute values (figure 4A; para [0055] “For example, the compact representation generator 254 may map the initial feature vector representing the detected object to a compact feature vector representing the detected object. In some embodiments, the compact feature vector may comprise a fewer number of modality features (and thus, having a lower feature dimension and smaller data size) as compared to the corresponding initial feature vector.” The compact feature vector having at least a first attribute is determined from an initial feature vector. Also see figure 7 which shows at least a first attribute.),
obtain a second attribute value corresponding to the first attribute value by inputting, to a second neural network model, at least one relevant attribute value related to the first attribute value among the plurality of attribute values (figure 4A; para [0055] figure 4A shows a first and second compact generator as described in para [0065] and each produce their respective compact feature vectors.),
based on similarity between the first attribute value and the second attribute value being greater than or equal to a first threshold value, obtain prediction information for the content by inputting one or more attribute values (para [0080] “ In some embodiments, the discriminator 258 may determine whether the similarity score between the first compact representation 422a of the first object and the n.sup.th compact representation 422n of the second object satisfies a predetermined score threshold (e.g., more than 50%). Responsive to determining that the similarity score between the first compact representation 422a of the first object and the n.sup.th compact representation 422n of the second object satisfies the predetermined score threshold, the discriminator 258 may determine that the first object in the first image and the second object in the second image represent the same object.”), and
train the second neural network model based on the obtained prediction information (para [0060] “In some embodiments, during the training process of the model, the similarity scorer 256 may also compute the feedback difference between the similarity score and the predetermined target output. In block 508, the similarity scorer 256 may provide the feedback difference to the compact representation generators 254a . . . 254n to train the model.”).
Guo does not explicitly disclose
obtain prediction information for the content by inputting one or more attribute values other than the first attribute value among the plurality of values and the second attribute value to the first neural network model.
However, Kowalski (US 20210335029 A1) teaches
obtain prediction information for the content by inputting one or more attribute values other than the first attribute value among the plurality of values and the second attribute value to the first neural network model (para [0087] “One-shot learning by fine tuning is used in some examples. It is not essential to use one-shot learning by fine tuning. One-shot learning by fine tuning comprises pre-training the encoder and the decoders (using the first and second stages of FIG. 9 or only the second stage of FIG. 9) and then training again using real images and with a loss function that encourages the neural renderer to reduce an identity gap between a face depicted in the real image and in the output image. It is unexpectedly found that one-shot learning by fine tuning is effective. One-shot learning modifies the embeddings and the whole decoder and it is surprising that control of the output image is still possible through the factorized embeddings even after one-shot learning by fine tuning has been done.” The decoder (i.e., first model) is pretrained based on embeddings (i.e., first attributes) from synthetic images. Later, the decoder inputs an embedding (i.e., second attribute) produced by the real encoder, as shown in figure 9, but not the first embedding.)
Guo and Kowalski are analogous because they are both directed to the field of neural networks.
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Guo with the one-shot learning of Kowalski.
Doing so would allow for fine-tuning the neural network to minimize the difference between the output and target output (Kowalski para [0087]).
Regarding Claim 3,
Guo and Kowalski teach the electronic device of claim 1. Guo further teaches wherein the processor is further configured to, based on similarity between the first attribute value and the second attribute value being less than the first threshold value (para [0074] “As discussed above, the similarity score between the first compact feature vector 422a of the first object and the n.sup.th compact feature vector 422n of the second object is 70%, rather than 100% as indicated by the predetermined target output.”), obtain prediction information for the content by inputting the plurality of attribute values including the first attribute value to the first neural network model (para [0076] “The implementation described above is advantageous for processing object similarity especially in vehicular context. As the model implemented by the compact representation generators 254 is subjected to multiple training cycles with multiple images, the compact representation generators 254 may learn to only include in the compact feature vectors 422 the modality features that are discriminative in representing the detected objects in each particular scenario, and thus most useful for the purpose of determining the object similarity.” The compact feature vector is adjusted and still inputted into the first compact generator (i.e., neural network).).
Regarding Claim 4,
Guo and Kowalski teach the electronic device of claim 1. Guo further teaches wherein the processor is further configured to, based on at least one of information about a correlation between the plurality of attribute values and information about distribution of each of the plurality of attribute values, identify the at least one relevant attribute value related to the first attribute value among the plurality of attribute values (para [0074] “As discussed above, the similarity score between the first compact feature vector 422a of the first object and the n.sup.th compact feature vector 422n of the second object is 70%, rather than 100% as indicated by the predetermined target output. Because of the feedback difference of 30% between the similarity score and the predetermined target output, the compact representation generator 254 may determine that other modality features (e.g., the context features, the viewpoint features, etc.) may be more determinative than the texture features and the color features (and thus, more efficient and distinguishable in representing the detected objects) if the initial feature representations of the detected objects includes the texture features and the color features within these particular ranges of feature values.” Similarity scores represent a correlation between the plurality of attribute values. The range of feature values indicates a distribution. Context features can be more relevant and are thus given more weight than other features.).
Regarding Claim 7,
Guo and Kowalski teach the electronic device of claim 1. Guo further teaches wherein the processor is further configured to:
based on obtaining the plurality of attribute values for the content, identify the first attribute value and the third attribute value among the plurality of attribute values (figure 4A; para [0055] “For example, the compact representation generator 254 may map the initial feature vector representing the detected object to a compact feature vector representing the detected object. In some embodiments, the compact feature vector may comprise a fewer number of modality features (and thus, having a lower feature dimension and smaller data size) as compared to the corresponding initial feature vector.” The compact feature vector has at least a first and second attribute which is determined from an initial feature vector. Also see figure 7.),
obtain a fourth attribute value corresponding to the third attribute value by inputting at least one relevant attribute value related to the third attribute value, among the plurality of attribute values, to the second neural network model (figure 4A; para [0055] figure 4A shows a first and second compact generator as described in para [0065] and each produce their respective compact feature vectors.), and
based on similarity between the third attribute value and the fourth attribute value being greater than or equal to a specified fourth threshold value, obtain prediction information for the content by inputting one or more attribute values (para [0080] “ In some embodiments, the discriminator 258 may determine whether the similarity score between the first compact representation 422a of the first object and the n.sup.th compact representation 422n of the second object satisfies a predetermined score threshold (e.g., more than 50%). Responsive to determining that the similarity score between the first compact representation 422a of the first object and the n.sup.th compact representation 422n of the second object satisfies the predetermined score threshold, the discriminator 258 may determine that the first object in the first image and the second object in the second image represent the same object.”) other than the first attribute value and the third attribute value among the plurality of attribute values, the second attribute value, and the fourth attribute value to the first neural network model (para [0075] “As a result, when the compact representation generators 254 process similar initial feature vectors 420 (e.g., the initial feature vectors 420 having the texture features and/or the color features of approximately the same feature values), the texture features and/or the color features are likely filtered out from the initial feature vectors 420 to generate the corresponding compact feature vectors 422 of the detected objects”).
Regarding Claim 8,
Guo and Kowalski teach the electronic device of claim 1. Guo further teaches the electronic device further comprising:
an inputter comprising input circuitry; and an outputter comprising output circuitry (para [0095] “Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.”);
wherein the processor is further configured to:
control the outputter to provide the obtained prediction information, receive feedback for the prediction information through the inputter (para [0072] “The similarity scorer 256 may compare the similarity score to the predetermined target output, and therefore determine the feedback difference between the similarity score and the predetermined target output to be 30%.”), and
train the second neural network model based on the received feedback (para [0072] “In some embodiments, the similarity scorer 256 may provide the feedback difference between the similarity score and the predetermined target output to the compact representation generators 254a . . . 254n for training the model.”).
Regarding Claim 10,
Claim 10 is the method corresponding to the device of claim 1. Claim 10 is substantially similar to claim 1 and is rejected on the same grounds.
Regarding Claim 12,
Claim 12 is the method corresponding to the device of claim 3. Claim 12 is substantially similar to claim 3 and is rejected on the same grounds.
Regarding Claim 13,
Claim 13 is the method corresponding to the device of claim 4. Claim 13 is substantially similar to claim 4 and is rejected on the same grounds.
Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Guo/Kowalski, as applied above, and further in view of Kim et al. (US-20220284921-A1).
Regarding Claim 2,
Guo and Kowalski teach the electronic device of claim 1.
Kowalski further teaches
and wherein training of the first neural network model is stopped while the second neural network model is being trained (para [0079] “With reference to FIG. 9 a first stage 900 involves omitting the real data encoder 904 and randomly generating 906 embeddings of real images. During the first stage the synthetic data encoder and the decoder are trained using backpropagation 908 and using synthetic images.” Training of the real data encoder is omitted (i.e., stopped) while the decoder is trained.)
Guo and Kowalski do not explicitly disclose
wherein the processor is further configured to: obtain a loss value comprising the prediction information obtained through the first neural network model and label information corresponding to the plurality of attribute values, and train the second neural network model based on the loss value,
However, Kim (US 20220284921 A1) teaches
wherein the processor is further configured to: obtain a loss value comprising the prediction information obtained through the first neural network model and label information corresponding to the plurality of attribute values (para [0053] “A label of a voice section or a non-voice section is assigned to each frame time of the input signal.”), and train the second neural network model based on the loss value (para [0057] “In the self-supervised learning according to the present embodiment, the first neural network and the second neural network are trained on the basis of a loss function using the synchronized correlation coefficient as a positive sample and the unsynchronized correlation coefficient as a negative sample. By using such a loss function, the first neural network and the second neural network are trained such that the synchronized correlation coefficient increases and the unsynchronized correlation coefficient decreases, that is, to reinforce discrimination for voice and non-voice of the acoustic feature and the non-acoustic feature.”),
Guo, Kowalski, and Kim are analogous because they are directed towards comparing feature values between two neural networks.
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Guo and Kowalski with the loss function of Kim.
Doing so would allow for updating the learning parameters of the neural network according to a contrastive loss in accordance with any optimization method (Kim para [0091]).
Regarding Claim 11,
Claim 11 is the method corresponding to the device of claim 2. Claim 11 is substantially similar to claim 2 and is rejected on the same grounds.
Claims 5-6, 9, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Guo/Kowalski, as applied above, and further in view of Kormilitsin et al. (US-20230334360-A1).
Regarding Claim 5,
Guo and Kowalski teach the electronic device of claim 1.
Guo and Kowalski do not explicitly disclose
wherein the processor is further configured to:
obtain first test values corresponding to the first attribute value by inputting, to the second neural network model, each of remaining attribute values other than the first attribute value among the plurality of attribute values,
identify first test values having similarity that is greater than a specified second threshold value among the first test values,
identify attribute values corresponding to first test values having similarity with the first attribute value greater than or equal to a specified second threshold value as candidate attribute values to identify the relevant attribute value, and
identify the at least one relevant attribute value related to the first attribute value based on the identified candidate attribute values.
However, Kormilitsin (US 20230334360 A1) teaches
obtain first test values corresponding to the first attribute value by inputting, to the second neural network model, each of remaining attribute values other than the first attribute value among the plurality of attribute values (para [0061] “FIG. 8 shows the test data 704 and target data 702 being inputted to the trained prediction model 708 to obtain a second performance measure 810(1) that quantifies the performance of the prediction model 708 in the absence of the first feature f.sub.1. However, instead of training a new model with one fewer feature, which would incur significant computational resources, the test data 704(1) for the first feature f.sub.1 is randomized (see randomization 802) to create randomized test data 804(1) that is inputted to the trained prediction model 708. This randomization is equivalent to replacing the test data 704(1) with noise. The second performance measure 810(1) is therefore equivalent to the first performance measure 710 except that the impact of the first feature f.sub.1 has been essentially excluded.”),
identify first test values having similarity that is greater than a specified second threshold value among the first test values (para [0043] “A high convergence score (e.g., above a threshold) may indicate that the updated bucket ranking 418 is so similar to the initial bucket ranking 402 that an additional iteration is unlikely to yield significant additional changes to bucket rank (i.e., the method 400 has converged).” para [0052] “high convergence score may indicate that the updated candidate-feature ranking 518 is so similar to the initial candidate-feature ranking 502 that an additional iteration is unlikely to yield significant additional changes to candidate-feature rank (i.e., the method 400 has converged).”),
identify attribute values corresponding to first test values having similarity with the first attribute value greater than or equal to a specified second threshold value as candidate attribute values to identify the relevant attribute value (para [0043] “A high convergence score (e.g., above a threshold) may indicate that the updated bucket ranking 418 is so similar to the initial bucket ranking 402 that an additional iteration is unlikely to yield significant additional changes to bucket rank (i.e., the method 400 has converged).” para [0052] “high convergence score may indicate that the updated candidate-feature ranking 518 is so similar to the initial candidate-feature ranking 502 that an additional iteration is unlikely to yield significant additional changes to candidate-feature rank (i.e., the method 400 has converged).”), and
identify the at least one relevant attribute value related to the first attribute value based on the identified candidate attribute values (para [0028] “The first candidate feature f.sub.1 has the highest score s.sub.1.sup.(0) and therefore may be referred to as the most-relevant candidate feature. Similarly, the second candidate feature f.sub.2 has the second highest score s.sub.2.sup.(0) and therefore may be referred to as the second most-relevant candidate feature.”).
Guo, Kowalski, and Kormilitsin are analogous because they are directed to the field of neural networks.
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Guo and Kowalski with the feature selection of Kormilitsin.
Doing so would allow reducing the number of non-relevant features to improve computation resource usage and the model’s accuracy (Kormilitsin para [0008]).
Regarding Claim 6,
Guo, Kowalski, and Kormilitsin teach the electronic device of claim 5. Kormilitsin further teaches wherein the processor is further configured to:
obtain second test values corresponding to the first attribute value by inputting, to the second neural network model (para [0060] “In FIG. 7, test data 704 associated with each of the p.sub.1 candidate features 102 in the first bucket 110(1) are inputted to a trained prediction model 708.”), each of the combinations of the identified candidate attribute values,
identify second test values having similarity with the first attribute value greater than a specified third threshold value among the second test values (Abs. “The Swiss tournament is advantageous when there are so many candidate feature variables that it is computationally infeasible to test all combinations of these candidates.”),
identify combinations corresponding to second test values having similarity with the first attribute value greater than or equal to a specified third threshold value as candidate attribute values to identify the relevant attribute value (para [0043] “A high convergence score (e.g., above a threshold) may indicate that the updated bucket ranking 418 is so similar to the initial bucket ranking 402 that an additional iteration is unlikely to yield significant additional changes to bucket rank (i.e., the method 400 has converged).” para [0052] “high convergence score may indicate that the updated candidate-feature ranking 518 is so similar to the initial candidate-feature ranking 502 that an additional iteration is unlikely to yield significant additional changes to candidate-feature rank (i.e., the method 400 has converged).”), and
identify attribute values included in a combination in which similarity between the second test values and the first attribute value is the highest among the identified candidate combinations as the at least one relevant attribute value related to the first attribute value (para [0028] “The first candidate feature f.sub.1 has the highest score s.sub.1.sup.(0) and therefore may be referred to as the most-relevant candidate feature. Similarly, the second candidate feature f.sub.2 has the second highest score s.sub.2.sup.(0) and therefore may be referred to as the second most-relevant candidate feature.”).
Guo, Kowalski, and Kormilitsin are analogous because they are directed to the field of neural networks.
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Guo and Kowalski with the feature selection of Kormilitsin.
Doing so would allow reducing the number of non-relevant features to improve computation resource usage and the model’s accuracy (Kormilitsin para [0008]).
Regarding Claim 9,
Guo and Kowalski teach the electronic device of claim 1.
Guo and Kowalski do not explicitly disclose
wherein the first attribute value includes an attribute value not included in learning data for learning of the first neural network or an attribute value having frequency included in the learning data less than a specified fifth threshold value.
However, Kormilitsin (US 20230334360 A1) teaches
wherein the first attribute value includes an attribute value not included in learning data for learning of the first neural network or an attribute value having frequency included in the learning data less than a specified fifth threshold value (para [0061] “The second performance measure 810(1) is therefore equivalent to the first performance measure 710 except that the impact of the first feature f.sub.1 has been essentially excluded.”),
Guo, Kowalski, and Kormilitsin are analogous because they are directed to the field of neural networks.
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Guo and Kowalski with the feature selection of Kormilitsin.
Doing so would allow reducing the number of non-relevant features to improve computation resource usage and the model’s accuracy (Kormilitsin para [0008]).
Regarding Claim 14,
Claim 14 is the method corresponding to the device of claim 5. Claim 14 is substantially similar to claim 5 and is rejected on the same grounds.
Regarding Claim 15,
Claim 15 is the method corresponding to the device of claim 6. Claim 15 is substantially similar to claim 6 and is rejected on the same grounds.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY K NGUYEN whose telephone number is (571)272-0217. The examiner can normally be reached Mon - Fri 7:00am-4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B Zhen, can be reached at 571-272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HENRY NGUYEN/Examiner, Art Unit 2121