DETAILED ACTION
Claim(s) 1, 16 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1):
Claim(s) 2,8 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1):
Claim(s) 3 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of LIU (CN 113244627 A) with SEARCH machine translation:
Claim(s) 4 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of LIU (CN 113244627 A) with SEARCH machine translation as applied in claims 3 and 18 further in view of BA (US 2023/0139396 A1):
Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of LIU (CN 113244627 A) with SEARCH machine translation as applied in claims 3 and 18 further in view of BA (US 2023/0139396 A1) as applied in claims 4 and 19 further in view of Haacke (US 2021/0275676 A1):
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of MASAJIRO et al. (JP 2021-043603 A) with SEARCH machine translation:
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of MASAJIRO et al. (JP 2021-043603 A) with SEARCH machine translation as applied in claim 6 further in view of SUN et al. (CN 114675742 B) with SEARCH machine translation:
Claim(s) 9,10 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of LIU (CN 105809146 A), referred to as LIU II, with SEARCH machine translation:
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of SUN et al. (CN 113449691 A), referred to as SUN II, with SEARCH machine translation:
Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of HE et al. (US 2022/0036059 A1):
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of CAO et al. (CN 111670357 A) with SEARCH machine translation:
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of CAO et al. (CN 111670357 A) with SEARCH machine translation as applied in claim 13 further in view of RIM et al. (US 2019/0221313 A1):
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of CAO et al. (CN 111670357 A) with SEARCH machine translation as applied in claim 13 further in view of RIM et al. (US 2019/0221313 A1) as applied in claim 14 further in view of Clapper (US 2003/0107584 A1):
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of CHENG (CN 108647089 A) with SEARCH machine translation:
Response to Amendment
The claim/specification amendment was received 2/11/2026. Claims 1-21 are pending.
[Image: media_image1.png, 797 x 175, greyscale]
Response to Arguments
III. Objection to Claim 7
Applicant’s arguments, see remarks, page 9, filed 2/11/2026, with respect to the claim objection of claim 7 have been fully considered and are persuasive. The claim objection of claim 7 has been withdrawn.
IV. Rejections of Claims 1-20 under 35 USC 101
Applicant’s arguments, see remarks, filed 2/11/2026, with respect to 35 USC 101 have been fully considered and are persuasive. The 35 USC 101 rejection has been withdrawn.
V. Rejections of Claims 1,16 and 20 Under 35 USC 103
Applicant’s arguments, see remarks, filed 2/11/2026, with respect to the rejection(s) of claim(s) 1,16 and 20 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made under 35 USC 103:
Claim(s) 1, 16 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1):
wherein Sjolund teaches in general artificial intelligence processing training data that exclusively leaves parts in an image in a feature-removal procedure (fig. 5:510: “OBTAIN TRAINING IMAGING DATA”) while excluding other features:
[Image: media_image2.png, 881 x 752, greyscale]
VI. Rejection of Claims 2-15 and 17-19 Under 35 USC 103
Applicant’s arguments, see remarks, pages 17-18, filed 2/11/2026, with respect to the rejection(s) of claim(s) 2-15 and 17-19 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of said CHENG (CN 108647089 A) with SEARCH machine translation.
VII. New Claim 21
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of CHENG (CN 108647089 A) with SEARCH machine translation, wherein CHENG teaches a plurality of policies via policy (i.e., strategy) modules:
[Image: media_image3.png, 837 x 1148, greyscale]
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 16 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1):
[Image: media_image4.png, 808 x 430, greyscale]
Re 1., YUAN teaches A method performed by an electronic device, the electronic device (in the sealed box to be opened) including memory and a processor having processing circuitry, the method comprising:
acquiring, by the electronic device (still in the sealed box), a first image (“frame identified as the first type”, pg. 11) corresponding to a first scene identifier (via “the target sub-scene identifier”, pg. 16, 1st txt blk);
by using a first artificial intelligence (AI) model (i.e., “an artificial intelligence
(Artificial Intelligence, AI) set in a virtual scene battle through training”, pg. 5: fig. 4: a battle) and the first image (“frame identified as the first type”, pg. 11), identifying, by the processor of the electronic device (still in the box), at least one first object of the first image contributing to (“type”, pg. 11, 3rd txt blk) classification (“and the type of the first video frame is output”, pg. 27, 5th txt blk) of the first image (“frame identified as the first type”, pg. 11) as the first scene identifier (via “the target sub-scene identifier”, pg. 16, 1st txt blk); and
generating (via “the first terminal Render”, pg. 13 last txt blk), by the processor of the electronic device (still in the box), training (“battle”, pg. 5, 4th txt blk) data (via identify texture data via “judge…a target virtual object is displayed in the video frame”, pg. 12, wherein “displayed in the video frame” (comprising texture data) represents being or being equivalent to the target virtual object: fig. 4) for the first (battle-training) AI model, by performing first (“input” “stage of processing” via said “displayed in the video frame”) processing for excluding the at least one first object of the first image (“frame identified as the first type”, pg. 11, via fig. 4), the exclusion of the at least one first object reducing overfitting for the first AI model for the at least one first object, wherein the reduction of overfitting for the first AI model for the at least one first object increases a variety of objects used by the first AI model to identify the first scene identifier.
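For illustration only (not part of the record or of any cited reference; the helper name, bounding box, and toy image are hypothetical), the claimed first processing — excluding an identified contributing object from the first image when generating training data, so the model cannot overfit to that object — might be sketched as:

```python
import numpy as np

def exclude_object(image: np.ndarray, bbox: tuple, fill: float = 0.0) -> np.ndarray:
    """Blank the region of an identified contributing object so the
    generated training data no longer contains it (the claimed
    overfitting-reduction step, sketched with a hypothetical bbox)."""
    y0, y1, x0, x1 = bbox
    out = image.copy()
    out[y0:y1, x0:x1] = fill  # the excluded "first object" region
    return out

# Toy 4x4 single-channel "first image"
first_image = np.arange(16, dtype=float).reshape(4, 4)
training_sample = exclude_object(first_image, (1, 3, 1, 3))
```

Training on such masked samples would, under the claim's rationale, push the model toward the remaining objects in the scene rather than the single excluded one.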
[Image: media_image5.png, 935 x 871, greyscale]
YUAN does not teach the difference a), b) of claim 1 of:
a) training (data)…
b) excluding (the at least one first)…
the exclusion…reducing overfitting…wherein the reduction of overfitting…
increases a variety of objects used by (the first AI model).
Lissi teaches the difference a) of claim 1 of:
a) training (“for said AI model” [0014]) (data).
Since YUAN teaches AI (Artificial Intelligence), one of skill in AI can make YUAN’s be as Lissi’s predictably recognizing the change “can be utilized dynamically during gameplay to assist in selecting automatically different camera angles that are the best for the specific type of content being experienced by a gamer.”, Lissi [0074]:
[Image: media_image6.png, 2072 x 1060, greyscale]
YUAN of the combination of YUAN,Lissi does not teach the remaining difference of claim 1:
b) excluding (the at least one first)…
the exclusion…reducing overfitting…wherein the reduction of overfitting…
increases a variety of objects used by (the first AI model).
Sjolund teaches the remaining difference of claim 1:
b) excluding (“features” [0130] last S) (the at least one first)…the (feature) exclusion…reducing overfitting (“to less overfitting” [0131])…wherein the (less) reduction of overfitting…increases a variety (“such that the classifier learned can be more generalized and lead to less overfitting” [0131]) of (“image” [0073] 4th S) objects (“features” [0073] 4th S) used by (a “separated optimizer” [0146], 5th S: fig. 14) (the first AI model).
Since YUAN of the combination of YUAN,Lissi teaches artificial intelligence, one of skill in the art of artificial intelligence can make YUAN’s of the combination of YUAN,Lissi be as Sjolund’s seeing in the change that “models provide improved results for a variety of imaging processing purposes such as reconstruction, segmentation, and other image processing aspects which may have missing or incomplete data or modeling.” Sjolund [0009] last S.
[Image: media_image7.png, 1753 x 1123, greyscale]
Claim 16 is rejected like claim 1:
16. (Currently Amended) An electronic (“computer”, YUAN, pg. 33, 5th txt blk) device comprising:
memory storing one or more computer programs including computer-executable instructions; and one or more processors,
wherein the computer-executable instructions, when executed by the one or more processors, cause the electronic device to:
acquire a first image corresponding to a first scene identifier,
by using a first artificial intelligence (AI) model and the first image, identify at least one first object of the first image contributing to classification of the first image as the first scene identifier, and
generate training data for the first AI model by performing first processing for excluding the at least one first object of the first image (via rejection of claim 1), the exclusion of the at least one first object reducing overfitting for the first AI model for the at least one first object,
wherein the reduction of overfitting for the first AI model for the at least one first object increases a variety of objects used by the first AI model to identify the first scene identifier.
Claim 20 is rejected like claims 1 and 16:
20. (Currently Amended) One or more non-transitory computer-readable storage media (“including a computer program” YUAN, pg. 33, 4th txt blk) storing one or more computer programs including computer-executable instructions that, when executed by one or more processors of an electronic device, cause the electronic device to perform operations, the operations comprising:
acquiring, by the electronic device, a first image corresponding to a first scene identifier;
by using a first artificial intelligence (AI) model and the first image, identifying, by the electronic device, at least one first object of the first image contributing to classification of the first image as the first scene identifier; and
generating, by the electronic device, training data for the first AI model by performing first processing for excluding the at least one first object of the first image (via the rejection of claim 1), the exclusion of the at least one first object reducing overfitting for the first AI model for the at least one first object,
wherein the reduction of overfitting for the first AI model for the at least one first object increases a variety of objects used by the first AI model to identify the first scene identifier.
Claim(s) 2,8 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1):
[Image: media_image8.png, 808 x 672, greyscale]
Re 2., YUAN of the combination (illustrated above) of YUAN,Lissi teaches The method of claim 1, wherein the identifying of the at least one first object by using the first AI model (i.e., “an artificial intelligence (Artificial Intelligence, AI) set in a virtual scene battle through training”, pg. 5: fig. 4: a battle) and the first image (“frame identified as the first type”, pg. 11) comprises:
identifying an activation (feature) map (“of the first video frame”, pg. 13, 2nd txt blk or “of multiple image blocks”, pg. 13, 4th txt blk) corresponding to the first image (“frame identified as the first type”, pg. 11); and
based on the activation (feature) map (“of the first video frame”, pg. 13, 2nd txt blk or “of multiple image blocks”, pg. 13, 4th txt blk), identifying the at least one first object.
YUAN of the combination (illustrated above) of YUAN,Lissi does not teach the difference of claim 2:
“identifying…activation (map)…
based on the activation (map)…”.
FROLOVA teaches the difference of claim 2:
identifying…activation (“most correlated with the newly received image” [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”) (map)…
(“the referenced set 150A can be compared to sets associated with reference image(s) 170” [0081] 1st S: fig. 1) based on the activation (map)….
Since YUAN of the combination (illustrated above) of YUAN,Lissi teaches AI (Artificial Intelligence), one of skill in the art of AI/machine learning can make YUAN’s of the combination (illustrated above) of YUAN,Lissi be as FROLOVA’s predictably recognizing the change “can enable the system to learn and improve from data based on its statistical characteristics rather on predefined rules of human experts”, FROLOVA [0014]:
[Image: media_image9.png, 1777 x 1001, greyscale]
[Image: media_image10.png, 680 x 1455, greyscale]
Re 8. (Currently Amended), YUAN of the combination (illustrated above) of YUAN,Lissi, FROLOVA teaches The method of claim 2, wherein the identifying of the at least one first (“a” “first” “area”, YUAN pg. 11, 3rd txt blk) object further comprises:
identifying at least one area (via “activation maps…correspond to various … regions” FROLOVA [0011] 7th S), in which (i.e., “activation maps…correspond to various … regions” FROLOVA [0011] 7th S) a feature importance (i.e., “similarity” [0053] 2nd S of a combination of two similar features/importances/characteristics/significances) satisfies a designated first condition (via “One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S), in the activation map (“most correlated with the newly received image” FROLOVA [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”);
determining that the at least one area (via “activation maps…correspond to various…regions” FROLOVA [0011] 7th S) of the activation map (“most correlated with the newly received image” FROLOVA [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”) satisfies at least one second condition (via “One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S); and
identifying the at least one first object of the first image (“frame identified as the first type”, YUAN pg. 11), which (i.e., “detecting” “a” “first” “area”, YUAN pg. 11, 3rd txt blk) corresponds to (via the above illustrated combination) the at least one area (via “activation maps…correspond to various…regions” FROLOVA [0011] 7th S) based on
the identifying of the at least one area (via “activation maps…correspond to various…regions”, FROLOVA [0011] 7th S, “most correlated with the newly received image” FROLOVA [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”) and
the determining that the at least one area (via “activation maps… correspond to various…regions” FROLOVA [0011] 7th S) of the activation map (“most correlated with the newly received image” FROLOVA [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”) satisfies the at least one second condition (via “One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S).
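For illustration only (hypothetical helper and toy values, not drawn from FROLOVA or YUAN; the "second condition" is simplified to a minimum-coverage count), the two-condition area identification recited above can be sketched as:

```python
import numpy as np

def identify_area(activation_map: np.ndarray, thr: float, min_size: int):
    """First condition: feature importance >= thr at a location.
    Second condition (simplified here): the identified area covers
    at least min_size locations of the activation map."""
    area = activation_map >= thr           # first condition, per location
    satisfies_second = int(area.sum()) >= min_size  # second condition
    return area, satisfies_second

amap = np.array([[0.1, 0.9],
                 [0.8, 0.2]])
area, ok = identify_area(amap, thr=0.5, min_size=2)
```

The boolean mask `area` then marks which activation-map locations could correspond back to the "first object" in the image.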
Claim 17 is rejected like claim 2:
17. (Currently Amended) The electronic device of claim 16, wherein the one or more computer programs further comprise computer-executable instructions to, as at least a part of the identifying of the at least one first object by using the first AI model and the first image:
identify an activation map corresponding to the first image, and
based on the activation map, identify the at least one first object (via the rejection of claim 2).
Claim(s) 3 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of LIU (CN 113244627 A) with SEARCH machine translation:
[Image: media_image11.png, 808 x 736, greyscale]
Re 3., YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA teaches The method of claim 2, wherein the identifying of the at least one first object, based on the activation map (which “can enable the system to learn and improve from data based on its statistical characteristics rather on predefined rules of human experts”, FROLOVA [0014]), comprises:
identifying (“as a candidate for modification in the CNN”, FROLOVA [0053] penult S) at least one area (via reflected “activation maps that correspond to various…regions…of the image” FROLOVA [0011] 7th S), in (via “reflected” “data”, FROLOVA [0093] 3rd S) which (either the activation map or image region) a feature (“map”, pg. 13, 2nd txt blk) importance satisfies a designated first (via “One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S) condition (“that reflects a satisfactory result”, FROLOVA [0095] 2nd S), in (via “reflected” “data”, FROLOVA [0093] 2nd S) the activation map (“most correlated with the newly received image” FROLOVA: [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”); and
identifying the at least one first object, which corresponds to the at least one area (via “activation maps… reflecting …regions” [0011] 7th S) in (via “reflected” “data”, FROLOVA [0093] 3rd S) the activation map (“most correlated with the newly received image” FROLOVA: [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”), of the first image (“frame identified as the first type”, pg. 11).
YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA does not teach the difference of claim 3 of “a (feature) importance”.
LIU teaches the difference of claim 3 of:
(“ranking based on”, pg. 10, 6th txt blk) a (feature) importance.
Since YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA teaches AI (Artificial Intelligence), one of skill in the art of AI can make YUAN’s of the combination (illustrated above) of YUAN,Lissi,FROLOVA be as LIU’s predictably recognizing the change “to improve the model prediction precision”, LIU, page 9, last txt blk:
[Image: media_image12.png, 1777 x 1908, greyscale]
Claim 18 is rejected like claim 3:
18. (Currently Amended) The electronic device of claim 17, wherein the one or more computer programs further comprise computer-executable instructions to, as at least a part of the identifying of the at least one first object, based on the activation map:
identify at least one area, in which a feature importance satisfies a designated first condition, in the activation map, and
identify the at least one first object, which corresponds to the at least one area in the activation map, of the first image (via the rejection of claim 3).
Claim(s) 4 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of LIU (CN 113244627 A) with SEARCH machine translation as applied in claims 3 and 18 further in view of BA (US 2023/0139396 A1):
[Image: media_image13.png, 807 x 843, greyscale]
Re 4., YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA,LIU teaches The method of claim 3, wherein the identifying of the at least one area (“as a candidate for modification in the CNN”, FROLOVA [0053] penult S), in (via “reflected” “data”, FROLOVA [0093] 3rd S) which (either the activation map or image region) the feature importance satisfies the designated first (via “One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S) condition, in the activation map, comprises identifying the at least one area (“as a candidate for modification in the CNN”, FROLOVA [0053] penult S) in (via “reflected” “data”, FROLOVA [0093] 3rd S) which (either the activation map or image region) the feature (“map”, YUAN pg. 13, 2nd txt blk) importance (“to improve the model prediction precision”, LIU, page 9, last txt blk) is equal to or greater (or “exceeds” FROLOVA [0054] 2nd S) than a designated threshold (via “defined”-“One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S) feature (“map”, YUAN pg. 13, 2nd txt blk) importance.
YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA,LIU does not teach the difference of claim 4 of:
“is equal to or greater than a …feature importance”.
BA teaches the difference of claim 4 of:
is equal to or greater than a … feature importance (“threshold for the one or more automated feature engineering models” [0066]).
Since LIU of the combination (illustrated above) of YUAN,Lissi,FROLOVA,LIU teaches feature importance and feature engineering, one of skill in the art of feature importances and feature engineering can make LIU’s of the combination (illustrated above) of YUAN,Lissi,FROLOVA,LIU be as BA’s predictably recognizing the change “provides for improved modeling accuracy based on inclusion of physically meaningful feature, improved model interpretability as the transformed features represent physical processes and dynamics”, BA [0022], last S:
[Image: media_image14.png, 2013 x 1908, greyscale]
Claim 19 is rejected like claim 4:
19. The electronic device of claim 18, wherein the one or more computer programs further comprise computer-executable instructions to, as at least a part of the identifying of the at least one area, in which the feature importance satisfies the designated first condition, in the activation map, identify the at least one area in which the feature importance is equal to or greater than a designated threshold feature importance (via the rejection of claim 4).
Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of LIU (CN 113244627 A) with SEARCH machine translation as applied in claims 3 and 18 further in view of BA (US 2023/0139396 A1) as applied in claims 4 and 19 further in view of Haacke (US 2021/0275676 A1):
[Image: media_image15.png, 805 x 981, greyscale]
Re 5., YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA,LIU,BA teaches The method of claim 4, wherein the identifying of the at least one area (“as a candidate for modification in the CNN”, FROLOVA [0053] penult S), in (via “reflected” “data”, FROLOVA [0093] 3rd S) which (either the activation map or image region) the feature importance (“to improve the model prediction precision”, LIU, page 9, last txt blk) is equal to or greater than the designated threshold feature importance (that “provides for improved modeling accuracy based on inclusion of physically meaningful feature, improved model interpretability as the transformed features represent physical processes and dynamics”, BA [0022], last S), comprises:
blank-processing (via “Such a corrected set can be provided for processing”, FROLOVA [0030] 7th S, of activation maps) remaining areas (via reflected “activation maps that correspond to various…regions…of the image” FROLOVA [0011] 7th S), in (via “reflected” “data”, FROLOVA [0093] 3rd S) which (either the activation map or image region) the feature importance (“to improve the model prediction precision”, LIU, page 9, last txt blk) is less than the designated threshold feature importance (that “provides for improved modeling accuracy based on inclusion of physically meaningful feature, improved model interpretability as the transformed features represent physical processes and dynamics”, BA [0022], last S), in the activation map (via reflected “activation maps that correspond to various…regions…of the image” FROLOVA [0011] 7th S);
configuring (“to use the template image”, YUAN pg. 27, 4th txt blk) at least one contour (comprised by a “template image”, YUAN pg. 11, 3rd txt blk) for the blank-processed (via “Such a corrected set can be provided for processing”, FROLOVA [0030] 7th S, of activation maps) remaining areas (via reflected “activation maps that correspond to various…regions…of the image” FROLOVA [0011] 7th S) in (via “reflected” “data”, FROLOVA [0093] 3rd S) the activation map (“most correlated with the newly received image” FROLOVA: [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”); and
(“The second server acquires the similarity between the template image of the target virtual object and multiple regions on the first video frame”, YUAN pg. 12, 2nd txt blk) based on the at least one contour (comprised by a “template image”, YUAN pg. 11, 3rd txt blk), identifying the at least one area (i.e., “detecting” “a” “first” “area”, YUAN pg. 11, 3rd txt blk) of the activation map (“most correlated with the newly received image” FROLOVA: [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”).
YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA,LIU,BA does not teach the difference of claim 5 of:
“blank-(processing)…remaining (areas)…
(importance)….is less than…(importance)…
blank-(processed) remaining (areas)”.
Haacke teaches the difference of claim 5:
(“left” [0072] 5th S) blank-(“normalized” [0075]: fig. 1:109: “Generate Output”) (processing)… (“left” [0072] 5th S) remaining (“blank” [0072] 5th S) (areas)…
(importance)….is less than (or “below the threshold” [0072] 5th S: fig. 3: 303: “Compare Similarity to Threshold”)… (importance)…
blank-(normalized: via fig. 1: 101: “Receive Time Resolved MR Data”) (processed) (“left” [0072] 5th S) remaining (“blank” [0072] 5th S: represented as fig. 1:109: “Generate Output”) (areas).
Since YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA,LIU,BA teaches image similarity, one of skill in the art of image similarity can make YUAN’s of the combination (illustrated above) of YUAN,Lissi,FROLOVA,LIU,BA be as Haacke’s predictably recognizing the change “enhances60 the SNR61 in the output images”, Haacke [0103] 2nd S, “to provide excellent structural details both in the original images and in the TSMs”, Haacke [0094] 1st S:
[media_image16.png]
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of MASAJIRO et al. (JP 2021-043603 A) with SEARCH machine translation:
[media_image17.png]
Re 6., YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA teaches The method of claim 2, further comprising:
based on an output layer (illustrated above) of the AI model (i.e., “an artificial intelligence (Artificial Intelligence, AI) set in a virtual scene battle through training”, pg. 5: fig. 4: a battle) and at least one feature map (“of the first video frame”, YUAN, pg. 13, 2nd txt blk) of the AI model (i.e., “an artificial intelligence (Artificial Intelligence, AI) set in a virtual scene battle through training”, pg. 5: fig. 4: a battle: a first video frame), identifying at least one contribution degree; and
based62 on the at least one contribution degree and the at least one feature map (“of the first video frame”, YUAN, pg. 13, 2nd txt blk), identifying (via detecting/ determining etc..) multiple feature importances63 of the activation map (which “can enable the system to learn and improve from data based on its statistical characteristics rather on predefined rules of human experts”, FROLOVA [0014]).
YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA does not teach the difference of claim 2:
“at least one contribution degree; and
based64 on the at least one contribution degree and…
multiple feature importances65”.
MASAJIRO teaches the difference of claim 6:
at least one contribution degree (“calculated for each grid in the image”, pg. 3, 4th txt blk); and
based66 on the at least one contribution degree and (“based on the feature map”, pg. 3, 4th txt blk) …(identifying) multiple feature importances67 (“higher than the predetermined value as the useful area”, pg. 12, 1st txt blk: fig. 2:AR11: t-shirt:
[media_image18.png])
Since YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA teaches recognition, one of skill in the art of recognition can make YUAN’s of the combination (illustrated above) of YUAN,Lissi,FROLOVA be as MASAJIRO’s predictably recognizing the change “improving the recognition rate”, MASAJIRO, pg. 11, 9th txt blk.
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of MASAJIRO et al. (JP 2021-043603 A) with SEARCH machine translation as applied in claim 6 further in view of SUN et al. (CN 114675742 B) with SEARCH machine translation:
[media_image19.png]
Re 7. (Currently Amended), YUAN of the combination (illustrated above) of YUAN,Lissi,FROLOVA, MASAJIRO teaches The method of claim 6, wherein the identifying of the at least one contribution degree (“calculated for each grid in the image”, MASAJIRO pg. 3, 4th txt blk) is performed based
on Equation 1, where Equation 1 is a_k^c = (1/Z) Σ_i Σ_j ∂y^c/∂A_ij^k,
wherein a_k^c is the contribution degree, c is a class of the output layer of the AI model, k is an index of the at least one feature map, Z is a product of a row and a column of a matrix of the at least one feature map, i is an i-th element in the matrix, j is a j-th element in the matrix, y^c is an output layer, and A_ij^k is at least one feature map,
wherein the identifying of the multiple feature importances of the activation map is identified based on Equation 2, where Equation 2 is L^c = ReLU(Σ_k a_k^c A^k), and
wherein L^c is an activation map, c is a class of the output layer of the AI model, k is an index of the at least one feature map, Z is a product of a row and a column of a matrix of the at least one feature map, i is an i-th element in the matrix, and j is a j-th element in the matrix.
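For clarity of the record, the recited Equations 1 and 2 correspond to the well-known Grad-CAM formulation. A minimal numerical sketch (illustrative only; array names are hypothetical and not taken from SUN) is:

```python
import numpy as np

def contribution_degrees(grads):
    """Equation 1: a_k^c = (1/Z) * sum_i sum_j  d y^c / d A_ij^k.
    grads has shape (K, H, W): the gradient of the class score y^c
    with respect to each feature map A^k; Z = H * W."""
    K, H, W = grads.shape
    return grads.reshape(K, H * W).sum(axis=1) / (H * W)

def grad_cam_map(weights, feature_maps):
    """Equation 2: L^c = ReLU(sum_k a_k^c * A^k)."""
    weighted = (weights[:, None, None] * feature_maps).sum(axis=0)
    return np.maximum(weighted, 0.0)
```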
YUAN of the combination (illustrated above) of YUAN, Lissi, FROLOVA, MASAJIRO does not teach the difference of claim 7:
equations 1 and 2.
SUN teaches equations 1 and 2 at [0206][0213] and corresponding machine translation pg 7, text block:
[media_image20.png]
Since MASAJIRO of the combination (illustrated above) of YUAN,Lissi, FROLOVA, MASAJIRO teaches a contribution degree, one of skill in the art of contribution degrees can make MASAJIRO’s of the combination (illustrated above) of YUAN,Lissi, FROLOVA, MASAJIRO be as SUN’s predictably recognizing the change “can effectively distinguish the related and non-related pixels and extract the characteristic of the classification result with higher contribution degree”, SUN, pg. 7, text block.
Claim(s) 9,10 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of LIU (CN 105809146 A), referred to as LIU II, with SEARCH machine translation:
[media_image21.png]
Re 9., FROLOVA of the combination (illustrated above) of YUAN,Lissi, FROLOVA teaches The method of claim 8, wherein the determining that the at least one area (via “activation maps…correspond to various … regions” FROLOVA [0011] 7th S) of the activation map (“most correlated with the newly received image” [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”) satisfies at least one second condition (via “One or more criteria (e.g., a threshold)”. FROLOVA [0053] 2nd S) comprises:
determining that a size of the at least one area (via “activation maps… correspond to various … regions” FROLOVA [0011] 7th S) of the activation map (“most correlated with the newly received image” [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”) is equal to or larger than a designated threshold (via “One or more criteria (e.g., a threshold)”. FROLOVA [0053] 2nd S) size.
FROLOVA of the combination (illustrated above) of YUAN,Lissi, FROLOVA does not teach the difference of claim 9 of:
“a size…a…size”.
LIU teaches the difference of claim 9:
a size (or “activation parameter68…is 0.9”, pg. 10, 1st txt blk)…
a (designated threshold) size (“is 0.8”, pg. 10).
Since FROLOVA of the combination (illustrated above) of YUAN,Lissi, FROLOVA teaches a threshold, one of skill in the art of thresholds can make FROLOVA’s of the combination (illustrated above) of YUAN,Lissi, FROLOVA be as LIU’s predictably recognizing the change “to improve the applicability of the scene recognition”, LIU, pg. 12, 5th txt blk.
Claim 10 is rejected like claim 9:
10. The method of claim 8, wherein the determining that the at least one area of the activation map satisfies at least one second condition comprises:
determining that a number of the at least one area of the activation map is equal to or more than a designated threshold number.
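By way of illustration only (function and variable names are the editor's hypothetical choices, not from the references), the second conditions of claims 9 and 10 — a size of an area meeting a designated threshold size, and a number of areas meeting a designated threshold number — can be sketched as:

```python
def satisfies_second_condition(area_sizes, threshold_size, threshold_number):
    """area_sizes: pixel counts, one per identified area of the activation map.
    Returns (claim-9-style size condition, claim-10-style number condition)."""
    # Claim 9 style: at least one area is equal to or larger than the threshold size.
    size_ok = any(a >= threshold_size for a in area_sizes)
    # Claim 10 style: the number of areas is equal to or more than the threshold number.
    number_ok = len(area_sizes) >= threshold_number
    return size_ok, number_ok
```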
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of SUN et al. (CN 113449691 A), referred to as SUN II, with SEARCH machine translation:
[media_image22.png]
Re 11., FROLOVA of the combination (illustrated above) of YUAN,Lissi, FROLOVA teaches The method of claim 8, wherein the determining that the at least one area (via “activation maps…correspond to various … regions” FROLOVA [0011] 7th S) of the activation map (“most correlated with the newly received image” [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”) satisfies at least one second condition (via “One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S) comprises:
determining that an overlapping degree between the at least one area (via “activation maps…correspond to various … regions” FROLOVA [0011] 7th S) of the activation map (“most correlated with the newly received image” [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”) is equal to or less than a threshold (via “One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S) overlapping degree.
FROLOVA of the combination (illustrated above) of YUAN,Lissi, FROLOVA does not teach the difference of claim 11 of:
“an overlapping degree…
overlapping degree”.
SUN teaches the difference of claim 11:
an overlapping degree (or “the overlapping degree”, pg. 7, 12th txt blk)…
overlapping degree (“threshold value”, pg. 7,12th txt blk).
Since FROLOVA of the combination (illustrated above) of YUAN,Lissi, FROLOVA teaches a threshold, one of skill in the art of thresholds can make FROLOVA’s of the combination (illustrated above) of YUAN,Lissi, FROLOVA be as SUN’s predictably recognizing the change “improving the network model under the premise of not increasing the calculation cost, inheriting the speed advantage of the YOLOv5algorithm. using non-local attention characteristic, not limited to local receptive field the target detection process, but using the global information, enhancing the characteristic fusion ability, considering the real time and effectively improving the accuracy of the identification network.”, SUN, pg. 8, 9th txt blk.
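By way of illustration only (a minimal sketch; the intersection-over-union definition of overlap and the box representation are the editor's assumptions, not from SUN II), the claim 11 condition that an overlapping degree between areas is equal to or less than a threshold overlapping degree can be sketched as:

```python
def overlap_degree(box_a, box_b):
    """Intersection-over-union of two (top, left, bottom, right) areas."""
    top = max(box_a[0], box_b[0])
    left = max(box_a[1], box_b[1])
    bottom = min(box_a[2], box_b[2])
    right = min(box_a[3], box_b[3])
    inter = max(0, bottom - top) * max(0, right - left)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0
```

The second condition would then be `overlap_degree(a, b) <= threshold` for each pair of areas.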
Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of FROLOVA et al. (US 2021/0081754 A1) as applied in claims 2,8 and 17 further in view of HE at al. (US 2022/0036059 A1):
[media_image23.png]
Re 12., FROLOVA of the combination (illustrated above) of YUAN,Lissi, FROLOVA teaches The method of claim 2, wherein the identifying of the at least one first 69object further comprises:
identifying (“as a candidate for modification in the CNN”, FROLOVA [0053] penult S) at least one area (via reflected “activation maps that correspond70 to various… regions…of the image” FROLOVA [0011] 7th S), in (via “reflected” “data”, FROLOVA [0093] 3rd S) which (either the activation map or image region) a feature (“map”, pg. 13, 2nd txt blk) importance71 satisfies a designated first (via “One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S) condition (“that reflects a satisfactory result”, FROLOVA [0095] 2nd S), in (via “reflected” “data”, FROLOVA [0093] 2nd S) the activation map (“most correlated with the newly received image” FROLOVA: [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”);
determining (“whether”72 FROLOVA [0053] 2nd S) that the at least one area (via “activation maps that correspond73 to various… regions…of the image” FROLOVA [0011] 7th S) of the activation map (“most correlated with the newly received image” FROLOVA: [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”) does not satisfy (not satisfy being implied or understood) at least one second condition (via “One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S); and
adjusting74 (“as a candidate for modification75 in the CNN”, FROLOVA [0053] penult S)
(A) a shape,
(B) a size, and/or
(C) a position
of at least a part of the at least one area (via reflected “activation maps that correspond76 to various… regions…of the image” FROLOVA [0011] 7th S),
based77 on
the identifying of the at least one area (“as a candidate for modification in the CNN”, FROLOVA [0053] penult S) and
the determining (“whether”78 FROLOVA [0053] 2nd S) that the at least one area (“as a candidate for modification in the CNN”, FROLOVA [0053] penult S) of the activation map (“most correlated with the newly received image” FROLOVA: [0030] 4th S: fig. 4: “COMPARE ACTIVATION MAP(S)”) does not satisfy (not satisfy being implied or understood via said “whether”79 FROLOVA [0053] 2nd S) the at least one second condition (via “One or more criteria (e.g., a threshold)”, FROLOVA [0053] 2nd S).
FROLOVA of the combination (illustrated above) of YUAN,Lissi, FROLOVA does not teach the difference of claim 12 the of Markush element [A,B and/or C]:
(A) a shape,
(B) a size, and/or
(C) a position
HE teaches the difference of claim 12 of the Markush element:
(A) a shape,
(B) (“adjusting”) a size (“of the first reference area through the processed second reference area to obtain the focus area of the human body attribute” [0092], fig. 2B: “Focus area C”: human body:
[media_image24.png]
), and/or
(C) a position80
Since FROLOVA of the combination (illustrated above) of YUAN,Lissi, FROLOVA teaches “recognition” (FROLOVA [0027]), one of skill in the art of recognition can make FROLOVA’s of the combination (illustrated above) of YUAN,Lissi, FROLOVA be as HE’s predictably recognizing the change “can better focus on the area that it needs to focus on, thereby improving the accuracy of the human body attribute recognition”, HE [0159] last S.
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of CAO et al. (CN 111670357 A) with SEARCH machine translation:
[media_image25.png]
Re 13., YUAN of the combination (illustrated above) of YUAN,Lissi teaches The method of claim 1, wherein the identifying of the at least one first 81object by using the first AI model (i.e., “an artificial intelligence (Artificial Intelligence, AI) set in a virtual scene battle through training”, pg. 5: fig. 4: a battle) and the first image (“frame identified as the first type”, pg. 11) comprises:
pre-processing the first image (“frame identified as the first type”, pg. 11); and
identifying the at least one first 82object, by using83 the pre-processed first image (“frame identified as the first type”, pg. 11) and the first AI model (i.e., “an artificial intelligence (Artificial Intelligence, AI) set in a virtual scene battle through training”, pg. 5: fig. 4: a battle).
YUAN of the combination (illustrated above) of YUAN,Lissi does not teach the difference of claim 13:
“pre-processing…pre-processed”.
CAO teaches the difference of claim 13:
pre-processing (via “pre-…processing data”, pg. 28, 5th txt blk)
(“The learning data selector can 1310-3 the data needed for learning from the”) pre-processed (“data”, pg. 28, last txt blk).
Since YUAN of the combination (illustrated above) of YUAN,Lissi teaches recognition, one of skill in the art of recognition can make YUAN’s of the combination (illustrated above) of YUAN,Lissi be as CAO’s predictably recognizing the change “becomes more and more intelligent system. the more the AI system is used, the recognition rate of the AI system can be improved more, and the AI system can more accurately understand the user preference, and therefore, the existing rule-based intelligent system is gradually replaced by the AI system based on the deep learning”, CAO, pg. 2, 7th txt blk.
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of CAO et al. (CN 111670357 A) with SEARCH machine translation as applied in claim 13 further in view of RIM et al. (US 2019/0221313 A1):
[media_image26.png]
Re 14., YUAN of the combination (illustrated above) of YUAN,Lissi,CAO teaches The method of claim 13, wherein the pre-processing (via “pre-…processing data”, CAO pg. 28, 5th txt blk) of the first image (“frame identified as the first type”, pg. 11) comprises:
converting a size of the first image (“frame identified as the first type”, pg. 11); and
performing blurring on the first image (“frame identified as the first type”, pg. 11).
YUAN of the combination (illustrated above) of YUAN,Lissi,CAO does not teach the difference of claim 14:
“converting a size of…
performing blurring on”.
RIM teaches the difference of claim 14:
converting a size of (“the image to an appropriate size” [0121])…
performing (resulting in an “applied”84 “Gaussian blur filter” RIM [0133]) blurring on (or “to85 an image” [0133]).
Since CAO of the combination (illustrated above) of YUAN,Lissi,CAO teaches pre-processing, one of skill in the art of pre-processing can make CAO’s of the combination (illustrated above) of YUAN,Lissi,CAO be as RIM’s predictably recognizing the change “improving the efficiency and performance of training”, RIM [0130].
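By way of illustration only (a minimal sketch under the editor's assumptions; RIM does not specify these implementations, and the nearest-neighbour resize and separable kernel are illustrative choices), the claim 14 pre-processing of converting the image size and performing Gaussian blurring can be sketched as:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Convert the size of a 2-D image via nearest-neighbour sampling."""
    h, w = img.shape
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[ys][:, xs]

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur with reflective edge handling."""
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()  # normalize so a constant image stays constant
    padded = np.pad(img.astype(float), radius, mode="reflect")
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)
```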
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of CAO et al. (CN 111670357 A) with SEARCH machine translation as applied in claim 13 further in view of RIM et al. (US 2019/0221313 A1) as applied in claim 14 further in view of Clapper (US 2003/0107584 A1):
[media_image27.png]
Re 15., YUAN of the combination (illustrated above) of YUAN,Lissi,CAO,RIM teaches The method of claim 14, wherein the performing blurring (resulting in an “applied”86 “Gaussian blur filter” RIM [0133]) on8788 the first image (“frame identified as the first type”, YUAN pg. 11) comprises:
determining a blurring degree, based89 on attributes of at least one (game-configuration) text (via “configuration interface90”, YUAN pg. 17, 5th txt blk: fig. 5:501-505, reproduced below, “displayed as type A-style B-red” YUAN, pg. 18 & “text data”, CAO, pg. 28, 6th txt blk) included9192 (via “including the target virtual object”, YUAN, pg. 21, 2nd txt blk) in the first image (“frame identified as the first type”, YUAN pg. 11); and
performing the blurring (resulting in an “applied”93 “Gaussian blur filter” RIM [0133]) based94 on the determined blurring degree.
[media_image28.png]
YUAN of the combination (illustrated above) of YUAN,Lissi,CAO,RIM does not teach the difference of claim 15 of:
“determining a blurring degree, based95 on attributes…96
(performing blurring)97 based98 on the determined blurring degree.”
Clapper teaches the difference of claim 15:
(definitely) determining (via “adjust99 the degree of information” [0036] last S, blurring) a blurring degree (via fig. 4:402: “APPLY BLUR TO ALL OR SELECTED GRAPHIC DATA?”), based100 on attributes (“depending in part upon the value of attributes” [0031]) …101
(performing blurring)102 (fig. 3:202) based103 on the (definitely) determined (information) blurring degree.
Since YUAN of the combination (illustrated above) of YUAN,Lissi,CAO,RIM teaches “security” (YUAN, pg. 6, 2nd txt blk), one of skill in the art of security can make YUAN’s of the combination (illustrated above) of YUAN,Lissi,CAO,RIM be as Clapper’s “security” [0060] predictably recognizing the change “relates generally to the field of data processing and, more particularly, to improved systems and methods for providing secure viewing of information on a display.” Clapper [0001].
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over YUAN (WO 2023/029900 A1) with SEARCH machine translation II in view of Lissi (US 2022/0172426 A1) and Sjolund et al. (US 2019/0332900 A1) as applied in claims 1,16,20 further in view of CHENG (CN 108647089 A) with SEARCH machine translation:
[media_image29.png]
Re 21. (New), YUAN of the combination of YUAN,Lissi,Sjolund teaches The electronic device of claim 16, wherein the one or more computer programs further comprise computer-executable instructions to:
apply a first (“terminal 110”, pg. 6, 1st txt blk) policy based on the first scene identifier,
wherein the first policy controls (via “control information”, pg. 6, 3rd txt blk) (“rendering”, pg. 17, 3rd txt blk) performance104 of the electronic device based on an index determined using the first scene identifier, the first scene identifier improving accuracy (“which can improve the accuracy of the template matching”, pg. 12, 3rd txt blk) of policy determination.
YUAN of the combination of YUAN,Lissi,Sjolund does not teach the difference105 of claim 21 of:
a) a (first) policy …performance106…
b) an index determined using (the first scene identifier).
CHENG teaches the difference of claim 21:
a) a (first) policy107 (comprised by “programs”108, pg. 6, 7th txt blk: figures 4,5: re-created below: “strategy109 module”: policy module)…(“adjust110 the”) performance111 (“of the associated system resource based on the resource allocation strategy”, pg. 9, 7th txt blk)…
b) an (“application operation index”, pg. 13, 7th txt blk) index determined (via fig. 10:1005) using112 (specially via “the preset application113 scene identifier corresponding to the application scene”, pg. 12, 6th txt blk: fig. 10: step 1003) the first scene identifier (via figures 4,5:
[media_image30.png]
[media_image31.png]
Since YUAN of the combination of YUAN,Lissi,Sjolund teaches a computer application, one of skill in the art of computer-apps can make YUAN’s of the combination of YUAN,Lissi,Sjolund be as CHENG’s seeing in the change “As shown in FIG. 2, in order to improve the running quality of the third party application program, to the data communication between the third-party application program and operating system, so that the operating system can obtain the third party application program the current scene information so as to do targeted system resource adaptation based on the current scene, at the same time, the third-party application program can real-time obtain the operating state of the operating system, thereby performing program optimization based on the running state.” CHENG, page 5, 4th txt blk:
[media_image32.png]
[media_image33.png]
Conclusion
The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure.
The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.
Citation
Relevance
Su (US 20200320289 A1)
Su teaches providing variety to lessen overfit plus excluding features:
[0030] For example, processing and parsing PDL files to automatically generate text images and corresponding training data items can run as batch jobs to generate hundreds, thousands, or even more rendered pages and corresponding training data item files related to print jobs from one or more applications. PDL inputs and/or rendered pages of print jobs from various applications (e.g., Word, Excel, Outlook, Visio) can provide a variety of text features for training the ANN(s) to aid in effectively reducing overfit of an OCR model of the ANN(s). The text features can include, but are not limited to, features regarding: text colors, background colors, text fonts, text sizes, text styles, character-related features, text locations, font effects, font styles, text orientation, punctuation, and page sizes. By using automated and/or batch processing, large numbers of rendered pages and corresponding training data item files can be easily added to an OCR training dataset without tedious human labeling.
[0031] Also, as both text images and training data items are generated from input PDL files, the input PDL files can be specifically created and/or modified to include, change, and/or exclude text features of the text images.
As the closest to the claimed “the exclusion of the at least one first object reducing overfitting” of claim 1.
Wang et al. (JP 2021526269 A w SEARCH machine translation & US 2021/0124928 A1)
Wang teaches increasing variety/diversity and overfitting and explicitly/implicitly teaches exclusion:
JP 2021526269 A:explicit, pg. 10:
“The current frame image may be another one-frame image excluding the reference frame image in the video, may be located before the reference frame image, or may be located after the reference frame image.”
US 2021/0124928 A1:implicit: [0047], last S:
“The current frame image can be a frame image other than the reference frame image in the video, and can be before or after the reference frame image, which is not limited in this embodiment.”
as the closest to the claimed “the exclusion of the at least one first object reducing overfitting” of claim 1.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO whose telephone number is (571)272-7397. The examiner can normally be reached Monday-Friday, 9AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENNIS ROSARIO/Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676
1 Non-limiting claim limitation: comma: the punctuation mark(,) indicating a slight pause in the spoken sentence and used where there is a listing of items or to separate a nonrestrictive clause or phrase (“by the electronic device”) from a main clause (Dictionary.com)
2 Non-limiting claim limitation: comma: the punctuation mark(,) indicating a slight pause in the spoken sentence and used where there is a listing of items or to separate a nonrestrictive clause or phrase (“by the electronic device”) from a main clause (Dictionary.com)
3 scene: an area or sphere of activity, current interest, etc.. (Dictionary.com)
4 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of., wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
5 scene: an area or sphere of activity, current interest, etc.. (Dictionary.com)
6 judge: to infer, think, or hold as an opinion; conclude about or assess, wherein infer is defined: to derive by reasoning; conclude or judge from premises or evidence, wherein derive is defined: to trace from a source or origin (comprised by “original pixel value”, pg. 14, 4th txt blk), wherein trace is defined: to ascertain by investigation; find out; discover, wherein find out (verb phrase) is defined: to uncover the true nature, identity, or intentions of (someone), wherein identity is defined: the qualities, beliefs, etc., that distinguish or identify a person or thing. (Dictionary.com)
7 object: Digital Technology. any item that can be individually selected or manipulated, as a picture, data file, or piece of text. (Dictionary.com)
8 Is: 3rd person singular present indicative of be ,wherein be is defined: (used as a copula to connect the subject {“a target virtual object”} with its predicate adjective, or predicate nominative (“displayed in the video frame”), in order to describe, identify, or amplify the subject {“a target virtual object”}), wherein identify is defined: to make, represent to be, or regard or treat as the same or identical, wherein represent is defined: to be the equivalent of; correspond to (Dictionary.com)
9 display: Digital Technology., to output (“texture”, pg. 17, 1st text blk, data) on a screen, wherein data is defined: (used with a singular verb), a body of facts; information, wherein information is defined: (“texture”) data at any stage of processing (input, output, storage, transmission, etc.) (Dictionary.com)
10 for: intended to belong to, or be used in connection with (Dictionary.com)
11 THE CLAIMED INVENTION AS A WHOLE regarding: “excluding”:
The problem is via applicant’s disclosure:
[0003] An electronic device may adjust the performance (which may be, but is not limited to, for example, a central processing unit (CPU) clock within the electronic device) of the electronic device, based on a current state (which may be, but is not limited to, for example, frames per second (FPS) and/or temperature for the output of a display). The electronic device may select a policy for a performance control based on the current state, and may control performance based on the selected policy. For example, in an overheat state, the electronic device may select a policy for reducing a CPU clock, and may control the performance of the electronic device, based on the selected policy. When a specific application (e.g., a game application) is executed, the electronic device may identify whether an index required by the application is satisfied. If the index required by the application is not satisfied, the electronic device may control performance by changing a policy. In order to accurately determine whether the index required for the specific application is satisfied, more accurate monitoring of the current state may be required.
The solution is:
[0090] In operation 605, the trainer may perform first processing for changing a pixel value of at least a part of a first area of the first image 620, for example, an area including the visual objects 626 and 627, thereby identifying training data 630 for the first AI model. The change of the pixel value here may include, for example, black-processing of changing the pixel value to a value corresponding to black, but those skilled in the art may understand that there is no limitation on a pixel value and/or a pixel value pattern after the change. The training data 630 for the first AI model may include, for example, black-processed areas 636 and 637. As the first processing (e.g., black-processing) is performed on the first area, for example, the area including the visual objects 626 and 627, the training data 630 for the first AI model may be provided, but the black-processing is merely an example, and there is no limitation on a processing scheme. For example, the first AI model 630 may identify, as the first scene identifier, a scene identifier corresponding to the first image 620, based on the visual objects 626 and 627 of the first image 620. There is a possibility that an image, which includes no visual object capable of increasing a possibility of classification as a scene identifier, is not classified as the corresponding scene identifier. For example, there may be a possibility that the first AI model 630 is trained to classify a scene identifier, which corresponds to an image that does not include the visual objects 626 and 627, as a scene identifier other than the first scene identifier. The trainer may generate the training data 630 excluding the visual objects 626 and 627 which have made a relatively large contribution to classification as the first scene identifier. The trainer may train the first AI model 630 by using the training data 630.
Accordingly, the first AI model 630 may be trained so that an image, which includes objects (e.g., at least some of the objects 621, 622, 623, 624, and 625) other than the visual objects 626 and 627 having made a relatively large contribution to classification as the first scene identifier, is also classified as the first scene identifier. Accordingly, the first AI model 630 may be trained based on various visual objects, and may not be over-fitted for some visual objects.
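For illustration only, the “black-processing” of [0090] can be sketched as a simple masking operation. This is a minimal sketch, not applicant’s implementation: the function name `black_process`, the NumPy array representation, and the region coordinates (stand-ins for the black-processed areas 636 and 637) are assumptions, and [0090] itself notes that the replacement value is not limited to black.

```python
import numpy as np

def black_process(image: np.ndarray, regions: list[tuple[int, int, int, int]]) -> np.ndarray:
    """Return a copy of `image` with each (top, left, height, width) region set
    to 0 (black), mimicking the first processing described in [0090]."""
    out = image.copy()
    for top, left, height, width in regions:
        # Overwrite the region containing a dominant visual object so the
        # resulting training sample excludes it.
        out[top:top + height, left:left + width] = 0
    return out

# Hypothetical example: a 4x4 grayscale stand-in for the "first image 620"
# whose lower-right 2x2 block represents the visual objects 626 and 627.
first_image = np.arange(16, dtype=np.uint8).reshape(4, 4)
training_sample = black_process(first_image, [(2, 2, 2, 2)])
```

Training on such masked copies alongside the originals is one conventional way an AI model can be discouraged from relying only on a few dominant visual objects, consistent with the over-fitting concern quoted above.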
12 this difference reasonably maps to applicant’s solution to the CPU-overheating problem: an indication of non-obviousness; however, the comma-enclosed claimed “, electronic device,” is not a limitation under the broadest reasonable interpretation of method claim 1.
13 (italics) represent claim limitations already taught
14 ellipses (…) represent claim limitations already taught
15 this difference reasonably maps to applicant’s solution to the CPU-overheating problem: an indication of non-obviousness
16 (italics) represent claim limitations already taught
17 ellipses (…) represent claim limitations already taught
18 this difference reasonably maps to applicant’s solution to the CPU-overheating problem: an indication of non-obviousness
19 (italics) represent claim limitations already taught
20 ellipses (…) represent claim limitations already taught
21 THE CLAIMED INVENTION AS A WHOLE regarding “electronic device”:
The overheating CPU problem is discussed in the rejection of claim 1. All references (YUAN, Lissi, Sjolund) teach a similar CPU problem regarding the GPU that aids the overburdened CPU, wherein GPU is defined: Computers. graphics processing unit: a secondary processor usually dedicated to performing the calculations necessary for producing computer graphics, lessening the burden on the main processor. (Dictionary.com). Thus it would have been obvious to combine as in the rejection of claim 1.
22 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
23 Present participle
24 Past participle
25 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
26 (Italics) represent claim limitations already taught above
27 Ellipses (…) represent claim limitations already taught above
28 (Italics) represent claim limitations already taught above
29 Ellipses (…) represent claim limitations already taught above
30 machine learning: a branch of artificial intelligence in which a computer generates rules underlying or based on raw data that has been fed into it (Dictionary.com)
31 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
32 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
33 similarity: an aspect, trait, or feature like or resembling another or another's, wherein feature is defined: a prominent or conspicuous part or characteristic, wherein prominent is defined: leading, important, or well-known, wherein important is defined: of much or great significance or consequence, wherein significance is defined: importance; consequence, wherein characteristic is defined: a distinguishing feature or quality, wherein feature is defined: a prominent or conspicuous part or characteristic, wherein prominent is defined: leading, important, or well-known, wherein important is defined: of much or great significance or consequence, wherein significance is defined: importance (Dictionary.com)
34 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
35 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
36 “based” is a past participle contributing to the action of “corresponds to the at least one area”
37 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
38 correspond: to be similar or analogous; be equivalent in function, position, amount, etc. (Dictionary.com)
39 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
40 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
41 (italics) represent claim limitations taught above
42 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
43 (italics) represent claim limitations taught above
44 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features (for example LIU teaches “characteristic” “characteristic item”, pg. 10, penult txt blk: i.e., feature-importance item) or (3) may be a term of art.
45 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
46 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
47 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
48 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
49 correspond: to be similar or analogous; be equivalent in function, position, amount, etc. (Dictionary.com)
50 correspond: to be similar or analogous; be equivalent in function, position, amount, etc. (Dictionary.com)
51 template: anything that determines or serves as a pattern; a model, wherein pattern is defined: a distinctive style, model, or form, wherein form is defined: the shape of a thing or person, wherein shape is defined: the quality of a distinct object or body in having an external surface or outline of specific form or figure, wherein outline is defined: the line by which a figure or object is defined or bounded; contour. (Dictionary.com)
52 correspond: to be similar or analogous; be equivalent in function, position, amount, etc. (Dictionary.com)
53 similarity: the state of being similar; likeness; resemblance, wherein likeness is defined: the state or fact of being like, wherein like is defined: corresponding or agreeing in general or in some noticeable respect, wherein noticeable is defined: attracting notice or attention; capable of being noticed, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
54 template: anything that determines or serves as a pattern; a model, wherein pattern is defined: a distinctive style, model, or form, wherein form is defined: the shape of a thing or person, wherein shape is defined: the quality of a distinct object or body in having an external surface or outline of specific form or figure, wherein outline is defined: the line by which a figure or object is defined or bounded; contour. (Dictionary.com)
55 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
56 (italics) represent claim limitations already taught above
57 Ellipses (…) represent claim limitations already taught above
58 (italics) represent claim limitations already taught above
59 Ellipses (…) represent claim limitations already taught above
60 enhance: (tr) to intensify or increase in quality, value, power, etc; improve; augment (Dictionary.com)
61 signal-to-noise ratio: the ratio of one parameter, such as power of a wanted signal to the same parameter of the noise at a specified point in an electronic circuit, etc. (Dictionary.com)
62 participle
63 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
64 participle
65 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
66 past participle contributing to the action of “identifying”.
67 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
68 parameter: Computers., a variable that must be given a specific value during the execution of a program or of a procedure within a program, wherein variable is defined: Mathematics, Computers.
a quantity or function that may assume any given value or set of values, wherein value is defined: Mathematics. magnitude; quantity; number represented by a figure, symbol, or the like, wherein magnitude is defined: size; extent; dimensions. (Dictionary.com)
69 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
70 correspond: to be similar or analogous; be equivalent in function, position, amount, etc. (Dictionary.com)
71 “feature importance”: (1) appears redundant or (2) suggests that there are contextually two features or (3) may be a term of art.
72 whether: (used to introduce a single alternative, the other being implied or understood, or some clause or element not involving alternatives). See whether or not she has come. I doubt whether we can do any better. (Dictionary.com)
73 correspond: to be similar or analogous; be equivalent in function, position, amount, etc. (Dictionary.com)
74 Markush element follows: [A, B and/or C]
75 modify: to change somewhat the form or qualities of; alter partially; amend: Synonyms: reform, shape, adjust, vary, wherein form is defined: the shape of a thing or person. (Dictionary.com)
76 correspond: to be similar or analogous; be equivalent in function, position, amount, etc. (Dictionary.com)
77 “based” is a past participle contributing to the action of the claimed “adjusting”
78 whether: (used to introduce a single alternative, the other being implied or understood, or some clause or element not involving alternatives). See whether or not she has come. I doubt whether we can do any better. (Dictionary.com)
79 whether: (used to introduce a single alternative, the other being implied or understood, or some clause or element not involving alternatives). See whether or not she has come. I doubt whether we can do any better. (Dictionary.com)
80 Since Markush alternative (B) is taught, the Markush element [A, B and/or C] is taught under the broadest reasonable interpretation of claim 12.
81 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
82 detect: to discover the existence of, wherein discover is defined: to notice or realize, wherein notice is defined: to perceive; become aware of, wherein perceive is defined: to recognize, discern, envision, or understand, wherein recognize is defined: to identify as something or someone previously seen, known, etc. (Dictionary.com)
83 “using” is a participle contributing to “identifying”
84 apply: to bring into action, wherein action is defined: something done or performed; act; deed. (Dictionary.com)
85 to: (used for expressing contact or contiguity) on; against; beside; upon. (Dictionary.com)
86 apply: to bring into action, wherein action is defined: something done or performed; act; deed. (Dictionary.com)
87 Applicant’s disclosure:
[0039] The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purpose only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
88 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com)
89 past participle
90 interface: The layout of an application's graphic or textual controls in conjunction with the way the application responds to user activity, wherein textual is defined: of or relating to a text. (Dictionary.com)
91 past participle
92 Claim scope/Claim range: The participle “included” can modify over the range of the nouns (1) “attributes”: first-image-included-attributes; (2) “text”: first-image-included-text; and (3) “attributes” & “text”: first-image-included-attributes-&-text, wherein scope is defined: Linguistics, Logic., the range of words or elements of an expression over which a modifier (i.e., patent examiner) or operator (i.e., patent examiner) has control.
In “old men and women,” “old” may either take “men and women” or just “men” in its scope. (Dictionary.com)
93 apply: to bring into action, wherein action is defined: something done or performed; act; deed. (Dictionary.com)
94 past participle
95 past participle
96 Ellipses (…) represent claim limitations taught above
97 (italics) represent claim limitations taught above
98 past participle
99 adjust: to put in good working order, wherein put is defined: to set, give, or make, wherein set is defined: to determine or fix definitely. (Dictionary.com)
100 past participle
101 Ellipses (…) represent claim limitations taught above
102 (italics) represent claim limitations taught above
103 past participle
104 “performance” is directly receiving the action of “controls”
105 THE CLAIMED INVENTION AS A WHOLE regarding the claimed “policy”:
The CPU problem is discussed in the rejection of claim 1.
Another problem, regarding accuracy, per applicant’s disclosure, is:
[0003] An electronic device may adjust the performance (which may be, but is not limited to, for example, a central processing unit (CPU) clock within the electronic device) of the electronic device, based on a current state (which may be, but is not limited to, for example, frames per second (FPS) and/or temperature for the output of a display). The electronic device may select a policy for a performance control based on the current state, and may control performance based on the selected policy. For example, in an overheat state, the electronic device may select a policy for reducing a CPU clock, and may control the performance of the electronic device, based on the selected policy. When a specific application (e.g., a game application) is executed, the electronic device may identify whether an index required by the application is satisfied. If the index required by the application is not satisfied, the electronic device may control performance by changing a policy. In order to accurately determine whether the index required for the specific application is satisfied, more accurate monitoring of the current state may be required.
The solution (figs. 8A, 8B, 9: training) to the accuracy problem is:
[0076] FIGS. 3E and 3F may illustrate an AI accuracy for each of different electronic device types according to various embodiments of the disclosure. For example, the AI model may be trained for an electronic device of a first type. FIG. 3E illustrates a scene prediction accuracy 341 and a receiver operating characteristic (ROC) curve 342 when the AI model is used in the electronic device of the first type. It may be identified that the scene prediction accuracy 341 and the ROC curve 342 are at relatively high levels. FIG. 3F illustrates a scene prediction accuracy 351 and an ROC curve 352 when the AI model is used in an electronic device of another type which is different from the first type. It may be identified that the scene prediction accuracy 351 and the ROC curve 352 are at relatively low levels. Accordingly, it may be required to precisely train the AI model for each application.
Claim 21 does not claim “precisely train the AI model for each application” (figs. 8A, 8B, 9: training): an indication of obviousness
106 “performance” is directly receiving the action of “controls”
107 BROAD CLAIM LANGUAGE: policy: a definite course of action adopted for the sake of expediency, facility, etc. (Dictionary.com)
108 program: a plan of action to accomplish a specified end, wherein plan is defined: a scheme or method of acting, doing, proceeding, making, etc., developed in advance, wherein method is defined: a procedure, technique, or way of doing something, especially in accordance with a definite plan, wherein procedure is defined: a particular course or mode of action. (Dictionary.com)
109 strategy: a plan, method, or series of maneuvers or stratagems for obtaining a specific goal or result, wherein plan is defined: a scheme or method of acting, doing, proceeding, making, etc., developed in advance, wherein scheme is defined: a plan, program, or policy officially adopted and followed, as by a government or business. (Dictionary.com)
110 adjust: (Dictionary.com)
111 “performance” is directly receiving the action of “controls”
112 -ing (of using): a suffix of nouns formed from verbs (use), expressing the action of the verb (use) or its result (“index determined”), product, material, etc. (the art of building; a new building; cotton wadding), wherein express is defined: to put (thought) into words; utter or state (Dictionary.com)
113 application: the act of putting to a special use or purpose. (Dictionary.com)