Prosecution Insights
Last updated: April 19, 2026
Application No. 18/195,868

OBJECT RECOGNITION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

Non-Final Office Action: §101, §103, §112

Filed: May 10, 2023
Examiner: KIM, SEHWAN
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)

Grant Probability: 60% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 4y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 60% (86 granted / 144 resolved; +4.7% vs Tech Center average)
Interview Lift: +65.6% allowance lift for resolved cases with an interview
Avg Prosecution (typical timeline): 4y 1m
Currently Pending: 35
Total Applications: 179 (career history, across all art units)

Statute-Specific Performance

§101: 20.8% (-19.2% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 6.3% (-33.7% vs TC avg)
§112: 23.3% (-16.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 144 resolved cases.

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner’s Note

The Examiner encourages Applicant to schedule an interview to discuss issues related to, for example, the rejections noted below under 35 U.S.C. §§ 101, 103 and 112, for moving the application toward allowance. Providing supporting paragraph(s) for each limitation of amended/new claim(s) in the Remarks is strongly requested for clear and definite claim interpretation by the Examiner.

Priority

Acknowledgment is made of Applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. CN202111109153.6, filed on 09/22/2021.

Claim Objections

Claim(s) 1-18 is/are objected to because of the following informalities.

Claim(s) 1 is/are objected to because of the following informalities: it appears that “one first sample object” (line 9) needs to read “a first sample object” or something else. Appropriate correction is required. In addition, claim(s) 7, 13 is/are objected to for the same reason.

It appears that “of to” (line 13) needs to read “of” or something else. Appropriate correction is required. In addition, claim(s) 7, 13 is/are objected to for the same reason.

It appears that “the annotation label and second label” (line 16) needs to read “the corresponding annotation label and second label” or something else. Appropriate correction is required. In addition, claim(s) 7, 13 is/are objected to for the same reason. In addition, claim(s) 2 (line 2), 8, 14 is/are objected to for the same reason. In addition, claim(s) 4 (line 5), 10, 16 is/are objected to for the same reason. In addition, claim(s) 5 (line 6), 11, 17 is/are objected to for the same reason.

Claim(s) 2 is/are objected to because of the following informalities: it appears that “a second label” (line 1) needs to read “the second label”, or something else. Appropriate correction is required.
In addition, claim(s) 8, 14 is/are objected to for the same reason. In addition, claim(s) 3 (line 4), 9, 15 is/are objected to for the same reason. In addition, claim(s) 4 (line 4), 10, 16 is/are objected to for the same reason. In addition, claim(s) 5 (line 5), 11, 17 is/are objected to for the same reason.

Claim(s) 3 is/are objected to because of the following informalities: it appears that “the annotation label and second label of each first sample object, and the first association relationship” (line 5) needs to read “the corresponding annotation label and second label and the corresponding first association relationship of each of the plurality of first sample objects”, since it indicates the last limitation of claim 1, or something else. Appropriate correction is required. In addition, claim(s) 9, 15 is/are objected to for the same reason.

Claim(s) 6 is/are objected to because of the following informalities: it appears that “object when the preset condition is satisfied is” (line 10) needs to read “object, when the preset condition is satisfied, is” or something else. Appropriate correction is required. In addition, claim(s) 12, 18 is/are objected to for the same reason.

Claim(s) 1-18 each recite(s) limitations that raise issues of indefiniteness as set forth above, and their dependent claims are objected to at least based on their direct and/or indirect dependency from the claims listed above. Appropriate explanation and/or amendment is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim(s) 1-18 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim(s) 1 recite(s) the limitation “the annotation label” (line 8). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to among “annotation labels”. It appears it may need to read “an annotation label”, or something else. For the purposes of examination, “an annotation label” is used. In addition, claim(s) 7, 13 is/are rejected for the same reason.

Claim(s) 1 recite(s) the limitation “the second label” (line 10). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to among “second labels” (line 8). It appears it may need to read “a second label”, or something else. For the purposes of examination, “a second label” is used. In addition, claim(s) 7, 13 is/are rejected for the same reason.

Claim(s) 2 recite(s) the limitation “the first association relationship” (line 8). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to among “first association relationships” (claim 1, line 12). It appears that it needs to read “a first association relationship” or something else. For the purposes of examination, “a first association relationship” is used. In addition, claims 8, 14 is/are rejected for the same reason. In addition, claim(s) 3 (line 2, line 5), 9, 15 is/are rejected for the same reason.
In addition, claim(s) 4 (last line), 10, 16 is/are rejected for the same reason. In addition, claim(s) 5 (last line), 11, 17 is/are rejected for the same reason.

Claim(s) 2 recite(s) the limitation “the annotation label and second label of the target object” (line 8). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears that it needs to read “an annotation label and a second label of the target object” or something else. For the purposes of examination, “an annotation label and a second label of the target object” is used. In addition, claims 8, 14 is/are rejected for the same reason.

Claim(s) 2 recite(s) the limitation “the updated fifth labels” (3rd last line). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears that it needs to read “updated fifth labels” or something else. For the purposes of examination, “updated fifth labels” is used. In addition, claims 8, 14 is/are rejected for the same reason.

Claim(s) 3 recite(s) the limitation “the relevant object data” (line 1). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to among “relevant object data of a target object” (claim 1, line 3), “relevant object data … of a plurality of first sample objects” (claim 1, line 7), or something else. It appears that it needs to read “relevant object data” or something else. For the purposes of examination, “relevant object data” is used. In addition, claims 9, 15 is/are rejected for the same reason. In addition, claims 4 (line 3), 10, 16 is/are rejected for the same reason.

Claim(s) 4 recite(s) the limitation “the influences” (2nd last line). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears that it needs to read “influences” or something else. For the purposes of examination, “influences” is used.
In addition, claims 10, 16 is/are rejected for the same reason.

Claim(s) 5 recite(s) the limitation “the first labels” (line 10). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears that it needs to read “first labels” or something else. For the purposes of examination, “first labels” is used. In addition, claims 11, 17 is/are rejected for the same reason.

Claim(s) 6 recite(s) the limitation “the various first sample objects” (line 4). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears that it needs to read “various first sample objects” or something else. For the purposes of examination, “various first sample objects” is used. In addition, claims 11, 17 is/are rejected for the same reason.

Claim(s) 6 recite(s) the limitation “the relevant object data of each first sample object” (line 5). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to since it is not clear how “each first sample object” is related to “the plurality of first sample objects” or “the various first sample objects”, or something else. It appears that it needs to read “relevant object data of each first sample object” or something else. For the purposes of examination, “relevant object data of each first sample object” is used. In addition, claims 11, 17 is/are rejected for the same reason.

Claim(s) 6 recite(s) the limitation “the annotation label of each first sample object” (line 7). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to since it is not clear how “each first sample object” is related to “the plurality of first sample objects” or “the various first sample objects”, or something else. It appears that it needs to read “annotation label of each first sample object” or something else. For the purposes of examination, “annotation label of each first sample object” is used. In addition, claims 11, 17 is/are rejected for the same reason.

Claim(s) 6 recite(s) the limitation “the following operations” (line 8). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears that it needs to read “following operations” or something else. For the purposes of examination, “following operations” is used. In addition, claims 11, 17 is/are rejected for the same reason.

Claim(s) 6 recite(s) the limitation “the third label of each first sample object” (line 9). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to since it’s not clear if “each first sample object” is of “the plurality of first sample objects” or not. It appears that it needs to read “a third label of each first sample object” or something else. For the purposes of examination, “a third label of each first sample object” is used. In addition, claims 11, 17 is/are rejected for the same reason.

Claim(s) 6 recite(s) the limitation “the annotation labels and third labels of the various first sample objects” (line 12). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears that it needs to read “annotation labels and third labels of the various first sample objects” or something else. For the purposes of examination, “annotation labels and third labels of the various first sample objects” is used. In addition, claims 11, 17 is/are rejected for the same reason.

Claim(s) 6 recite(s) the limitation “the fourth labels of the various first sample objects” (line 15). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears that it needs to read “fourth labels of the various first sample objects” or something else. For the purposes of examination, “fourth labels of the various first sample objects” is used. In addition, claims 11, 17 is/are rejected for the same reason.

Claim(s) 6 recite(s) the limitation “the association relationships” (line 16). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears that it needs to read “association relationships” or something else. For the purposes of examination, “association relationships” is used. In addition, claims 11, 17 is/are rejected for the same reason.

Claim(s) 1-18 each recite(s) limitations that raise issues of indefiniteness as set forth above, and their dependent claims are rejected at least based on their direct and/or indirect dependency from the claims listed above. Appropriate explanation and/or amendment is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1: The limitations of “An object recognition method …, the method comprising: …; predicting a first label of the target object … on the basis of the relevant object data of the target object, the first label representing an object type among a plurality of object types; … , …; determining first association relationships between the target object and the plurality of first sample objects according to the relevant object data of to the target object and the relevant object data of the plurality of first sample objects; and determining a second label of the target object according to the first label of the target object, the annotation label and second label and the corresponding first association relationship of each of the plurality of first sample objects as a recognition result of the target object”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim recites additional elements that are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). In particular, the claim recites additional element(s) (“performed by an electronic device”, “by an object recognition model”) – using a device and/or a model to process data.
The device and the model in each step are recited at a high level of generality (i.e., as a generic computer performing a generic computer function of processing data) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

In particular, the claim recites additional element(s) (“obtaining relevant object data of a target object”, “obtaining a reference data set”) – the act of receiving data. The claim is adding an insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). The act of receiving data is recited at a high level of generality (i.e., as a generic act of receiving data) such that it amounts to no more than a mere act to apply the exception using a generic act of receiving. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

In particular, the claim recites an additional element (“the reference data set comprising relevant object data and second labels of a plurality of first sample objects with annotation labels, the annotation label of one first sample object representing a real object type among the plurality of object types, and the second label of the first sample object representing a probability that the first sample object belongs to each of the plurality of object types”). This is a recitation of a particular type or source of model/data to be used in performing the abstract idea.
Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a generic computer component to perform each step amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See MPEP 2106.05(f). The claim is not patent eligible.

As discussed above, the claim recites the additional element(s) of receiving data at a high level of generality and adds an insignificant extra-solution activity – see MPEP 2106.05(g). However, the addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood, routine, and conventional. See MPEP 2106.05(d)(II) – “Receiving or transmitting data over a network” or “Storing and retrieving information in memory”. Accordingly, this additional element does not provide an inventive concept or significantly more than the abstract idea. Thus, the claim is not patent eligible.

This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).

Regarding claim 2

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A Prong 1: The limitations of “taking the first label of the target object as an annotation label and an initial second label of the target object; performing at least one label propagation between the target object and the first sample object on the basis of the first association relationship according to the annotation label and second label of the target object and the annotation label and second label of the first sample object, and obtaining an updated fifth label of the target object and an updated fifth label of the first sample object; and fusing, according to the first association relationships, the updated fifth labels of the first sample objects having the first association relationships with the target object, to obtain the second label of the target object”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Thus, the claim is not patent eligible.

Regarding claim 3

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A Prong 1: The limitations of “…; and the determining a second label of the target object according to the first label of the target object, the annotation label and second label of each first sample object, and the first association relationship comprises: obtaining a weight corresponding to each type of association relationship; and determining the second label of the target object according to the first label of the target object, the annotation label and second label of each first sample object, each type of association relationship, and the weight corresponding to each type of association relationship”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim recites an additional element (“wherein the relevant object data comprises at least one type of relevant object data, and the first association relationship comprises a type of association relationship corresponding to each type of relevant object data”).
This is a recitation of a particular type or source of data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. This is a recitation of a particular type or source of data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).

Regarding claim 4

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A Prong 1: The limitations of “determining, for the plurality of first sample objects, an influence of the first sample object according to the relevant object data; and the determining a second label of the target object according to the first label of the target object, the annotation label and second label and the corresponding first association relationship of each of the plurality of first sample objects as a recognition result of the target object comprising: determining the second label of the target object according to the first label of the target object, the annotation label and second label of each first sample object, the influences of the target object and each first sample object, and the first association relationship”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind.
That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the claim is not patent eligible.

Regarding claim 5

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1: The limitations of “determining a proportion of a number of objects of each object type of the target object and the plurality of first sample objects according to the first label of the target object and the annotation label of each first sample object; and the determining a second label of the target object according to the first label of the target object, the annotation label and second label and the corresponding first association relationship of each of the plurality of first sample objects as a recognition result of the target object comprising: taking the proportion of the number of objects of each object type as a weight, weighting the first labels of the corresponding object type of the target object, and weighting the annotation labels of the corresponding object type of the plurality of first sample objects; and determining the second label of the target object according to a weighted first label of the target object, a weighted annotation label and a weighted second label of each first sample object, and the first association relationship”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the claim is not patent eligible.

Regarding claim 6

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.

Step 2A Prong 1: The limitations of “wherein … by: …, …; determining second association relationships between the various first sample objects in the second training data set according to the relevant object data of each first sample object; and taking the annotation label of each first sample object as an initial third label of the first sample object, repeatedly performing the following operations until updated third labels of the plurality of first sample objects satisfy a preset condition, and determining that the third label of each first sample object when the preset condition is satisfied is the second label of the first sample object: obtaining, on the basis of the second association relationships and the annotation labels and third labels of the various first sample objects, an updated fourth label of each first sample object by performing label propagation between the plurality of first sample objects; and fusing, for each first sample object according to the second association relationships, the fourth labels of the various first sample objects having the association relationships with the first sample object, to obtain a new third label of the first sample object”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim recites additional element(s) (“the reference data set is obtained”, “obtaining a second training data set”) – the act of receiving data. The claim is adding an insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). The act of receiving data is recited at a high level of generality (i.e., as a generic act of receiving data) such that it amounts to no more than a mere act to apply the exception using a generic act of receiving. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

In particular, the claim recites an additional element (“the second training data set comprising the relevant object data of the plurality of first sample objects with the annotation labels”). This is a recitation of a particular type or source of data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the claim recites the additional element(s) of receiving data at a high level of generality and is adding an insignificant extra-solution activity – see MPEP 2106.05(g). However, the addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood, routine, and conventional. See MPEP 2106.05(d)(II) – “Receiving or transmitting data over a network” or “Storing and retrieving information in memory”. Accordingly, this additional element does not provide an inventive concept and significantly more than the abstract idea. Thus, the claim is not patent eligible. This is a recitation of a particular type or source of data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).

Regarding claim 7

The claim recites “An electronic device, comprising a memory, a processor, and a computer program stored on the memory that, when executed by the processor, causes the electronic device to perform an object recognition method including” to perform precisely the method of Claim 1. As performance of an abstract idea on generic computer components (see MPEP 2106.05(f)) and “Storing and retrieving information in memory” (see MPEP 2106.05(g) on Insignificant Extra-Solution Activity, and MPEP 2106.05(d) on Well-Understood, Routine, Conventional Activity) cannot integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself, the claim is rejected for reasons set forth in the rejection of Claim 1.

Regarding claim 8

The claim is rejected for the reasons set forth in the rejection of Claim 2 under 35 U.S.C.
101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception.

Regarding claim 9

The claim is rejected for the reasons set forth in the rejection of Claim 3 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception.

Regarding claim 10

The claim is rejected for the reasons set forth in the rejection of Claim 4 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception.

Regarding claim 11

The claim is rejected for the reasons set forth in the rejection of Claim 5 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception.

Regarding claim 12

The claim is rejected for the reasons set forth in the rejection of Claim 6 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception.

Regarding claim 13

The claim recites “A non-transitory computer-readable storage medium storing a computer program that, when executed by a processor of an electronic device, causes the electronic device to perform an object recognition method including:” to perform precisely the method of Claim 1.
As performance of an abstract idea on generic computer components (see MPEP 2106.05(f)) and “Storing and retrieving information in memory” (see MPEP 2106.05(g) on Insignificant Extra-Solution Activity, and MPEP 2106.05(d) on Well-Understood, Routine, Conventional Activity) cannot integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself, the claim is rejected for reasons set forth in the rejection of Claim 1.

Regarding claim 14

The claim is rejected for the reasons set forth in the rejection of Claim 2 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception.

Regarding claim 15

The claim is rejected for the reasons set forth in the rejection of Claim 3 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception.

Regarding claim 16

The claim is rejected for the reasons set forth in the rejection of Claim 4 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception.

Regarding claim 17

The claim is rejected for the reasons set forth in the rejection of Claim 5 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception.

Regarding claim 18

The claim is rejected for the reasons set forth in the rejection of Claim 6 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application nor providing significantly more than the judicial exception.
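For context on the “label propagation” operations recited in the claim 6 limitations discussed above, the following is a minimal illustrative sketch of generic graph label propagation (iteratively updating each node's soft labels by aggregating its neighbors' labels through a symmetrically normalized weight matrix P = D^(−1/2) W D^(−1/2), with annotated seeds clamped each round). All data, names, and parameters here are hypothetical; this is not the claimed method and not Wang's CMLP algorithm, only the standard technique the claim language and the cited prior art both build on.

```python
import math

def label_propagation(W, Y, alpha=0.9, iters=50):
    """Generic soft label propagation sketch (hypothetical parameters).

    W: n x n symmetric non-negative weight matrix (association relationships).
    Y: n x q seed label matrix; annotated nodes get a one-hot row,
       unannotated nodes get a zero row.
    Iterates F <- alpha * P @ F + (1 - alpha) * Y, where
    P = D^(-1/2) W D^(-1/2) is the normalized propagation matrix.
    """
    n, q = len(W), len(Y[0])
    deg = [sum(row) for row in W]
    dinv = [1.0 / math.sqrt(d) if d > 0 else 0.0 for d in deg]
    # Symmetrically normalized propagation matrix.
    P = [[dinv[i] * W[i][j] * dinv[j] for j in range(n)] for i in range(n)]
    F = [row[:] for row in Y]  # soft labels, initialized from annotations
    for _ in range(iters):
        # Aggregate neighbor labels, while pulling back toward the seeds.
        F = [[alpha * sum(P[i][k] * F[k][c] for k in range(n))
              + (1 - alpha) * Y[i][c]
              for c in range(q)] for i in range(n)]
    return F

# Toy 3-node chain: node 0 is annotated class 0, node 2 is annotated
# class 1, node 1 is unannotated and, by symmetry, ends up with equal
# soft-label mass for both classes.
W = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
Y = [[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]]
F = label_propagation(W, Y)
```

The clamping term (1 − alpha) · Y is what keeps annotated nodes anchored to their real object types while label mass diffuses to unannotated nodes, which is the behavior the claim language describes as repeatedly propagating and fusing labels until a preset condition is satisfied.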
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 6-10, 12-16, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (Collaboration Based Multi-Label Propagation for Fraud Detection) in view of Ye et al.
(GPU-Accelerated Graph Label Propagation for Real-Time Fraud Detection).

Regarding claim 1

(Note: Hereinafter, if a limitation has bold brackets (i.e. [·]) around claim languages, the bracketed claim languages indicate that they have not been taught yet by the current prior art reference but they will be taught by another prior art reference afterwards.) Wang teaches An object recognition method performed by an [electronic] device, the method comprising: (Wang [sec(s) 1] “To bridge this gap, we propose to assign each user multiple fraud labels. Moreover, since there are hundreds of millions of users, fully-annotation is impossible and only a few labeled data is available. Hence, it is formalized to a semi-supervised multi-label learning (SSML) problem. … Consequently, this paper proposes a novel collaboration based multi-label propagation method (CMLP) for largescale SSML fraud detection task.” [sec(s) 4] “The computations are performed on MaxCompute platform, a fast, distributed and fully hosted GB/TB/PB level data warehouse solution. We use three computation instances for time comparison and 3000 instances for performance comparison. Results Table 2 reports the transductive performance of all the methods on Taobao-FUD. From the results, we observe that H-CMLP is the most successful method. In terms of Micro-F1, Macro-F1, and Example-F1, H-CMLP improves the best results of the baselines by 1.6%, 4.6%, 2.2% respectively. The results demonstrate that the proposed method can effectively address the fraud detection task because our collaboration technique can sufficiently exploit the label correlations. Table 4 shows the average running time on each iteration of H-CMLP and Vanilla-LP on three subgraphs. For H-CMLP, the running time has no obvious fluctuation. The reason is the size of U-I graph is really small, and the main computation is the scheduling procedure of MaxCompute. For Vanilla-LP, the U-U graph is much larger, and thus its running time grows quickly.
By these observations, we conclude that our method can efficiently address the multi-label fraud detection task”;) obtaining relevant object data of a target object; (Wang [fig(s) 2] “An example of User-Item graph and the corresponding User-User graph. The U-I graph has 12 edges and the U-U graph has 24 edges. The popular item chocolate is linked to seven users, and thus the U-U graph contains a complete subgraph with seven nodes. The red lines in U-I graph represent fraud behaviours.” [sec(s) 3] “Yet, we proposed a generic correlation-aware propagation algorithm for ordinary SSML tasks. In effect, the graph can be collected in a variety of ways, e.g. k-nearest neighbor adjacency graph [Wang and Tsotsos, 2016], webpage links [Wu et al., 2014; Wu et al., 2015] and so on. In the e-commerce fraud detection scenario, our goal is to determine whether a user has some fraud behaviors. In general, malicious merchants will hire many fraud users to buy the same items. Therefore, a natural choice is to synthesize a user-user mapping (U-U graph) from the user-item bipartite graph (U-I graph), i.e. two users are connected to each other if they are interested in the same item. Though this strategy avoids building a graph explicitly, which is usually time-consuming, we observe that it gives a really dense graph in practice (Figure 2). In many e-commerce platforms, a popular item can be connected to millions of users. Thus, U-U graph may contain many complete subgraphs and the number of edges grows dramatically. As we discussed above, the main computation cost of CMLP lies in the propagation procedure, i.e. aggregating label information from adjacent nodes. 
Thus, it will be demanding in large-scale datasets.”;) predicting a first label of the target object by an object recognition model on the basis of the relevant object data of the target object, the first label representing an object type among a plurality of object types; (Wang [fig(s) 2] “An example of User-Item graph and the corresponding User-User graph. The U-I graph has 12 edges and the U-U graph has 24 edges. The popular item chocolate is linked to seven users, and thus the U-U graph contains a complete subgraph with seven nodes. The red lines in U-I graph represent fraud behaviours.” [sec(s) 1] “To bridge this gap, we propose to assign each user multiple fraud labels. Moreover, since there are hundreds of millions of users, fully-annotation is impossible and only a few labeled data is available. Hence, it is formalized to a semi-supervised multi-label learning (SSML) problem. … Consequently, this paper proposes a novel collaboration based multi-label propagation method (CMLP) for largescale SSML fraud detection task.” [sec(s) 3] “Yet, we proposed a generic correlation-aware propagation algorithm for ordinary SSML tasks. In effect, the graph can be collected in a variety of ways, e.g. k-nearest neighbor adjacency graph [Wang and Tsotsos, 2016], webpage links [Wu et al., 2014; Wu et al., 2015] and so on. In the e-commerce fraud detection scenario, our goal is to determine whether a user has some fraud behaviors. In general, malicious merchants will hire many fraud users to buy the same items. Therefore, a natural choice is to synthesize a user-user mapping (U-U graph) from the user-item bipartite graph (U-I graph), i.e. two users are connected to each other if they are interested in the same item. Though this strategy avoids building a graph explicitly, which is usually time-consuming, we observe that it gives a really dense graph in practice (Figure 2). In many e-commerce platforms, a popular item can be connected to millions of users. 
Thus, U-U graph may contain many complete subgraphs and the number of edges grows dramatically. As we discussed above, the main computation cost of CMLP lies in the propagation procedure, i.e. aggregating label information from adjacent nodes. Thus, it will be demanding in large-scale datasets.”;) obtaining a reference data set, the reference data set comprising relevant object data and second labels of a plurality of first sample objects with annotation labels, the annotation label of one first sample object representing a real object type among the plurality of object types, and the second label of the first sample object representing a probability that the first sample object belongs to each of the plurality of object types; (Wang [fig(s) 2] [sec(s) 3] “We denote the instance matrix by X ∈ R^(n×p). The target matrix of labeled data is denoted by Y ∈ {−1, +1}^(l×q) (q << n). Given an undirected graph G = <E, V, W>, vanilla label propagation method iteratively updates the labels of one certain node by aggregating its neighbor’s label information. Here E, V are edge, vertex sets, respectively. W = [w_ij]_(n×n) is a non-negative weight matrix. Let P = D^(−1/2)WD^(−1/2) be the propagation matrix by normalizing the columns of W, where D = diag[d_1, d_2, ..., d_n] is a diagonal matrix with [equation image]. … That is, the final output absorbs the prediction of other labels in a collaborative fashion. By regarding the ground-truth of labeled data as final prediction, CMLP estimates R by [equation image] (3) … Updating F With Fixed Z When Z is fixed, the remaining items constitute an objective of soft label propagation algorithm with independent label Z.
We first derive the gradient by, [equation image] (5)” [sec(s) 4] “We choose four real-world multi-label datasets from different task domains: 1) Medical [Pestian et al., 2007]: a text dataset contains clinical free texts, each of which is with 45 ICD9-CM labels, from CCHMC Department of Radiology. 2) Image [Wang et al., 2019]: a collection of 2,000 images that are annotated by 5 labels. 3) Slashdot [Read et al., 2009]: a web text dataset collects 3,782 technology-related news from 22 categories. 4) Eurlex-sm [Loza Mencía and Fürnkranz, 2008]: a large text dataset contains 19,348 legal documents about European Union law, having 201 subject matters tags.”; e.g., “soft label” read(s) on “probability”.) determining first association relationships between the target object and the plurality of first sample objects according to the relevant object data of to the target object and the relevant object data of the plurality of first sample objects; and (Wang [fig(s) 2] [sec(s) 3] “We denote the instance matrix by X ∈ R^(n×p). The target matrix of labeled data is denoted by Y ∈ {−1, +1}^(l×q) (q << n). Given an undirected graph G = <E, V, W>, vanilla label propagation method iteratively updates the labels of one certain node by aggregating its neighbor’s label information. Here E, V are edge, vertex sets, respectively. W = [w_ij]_(n×n) is a non-negative weight matrix. Let P = D^(−1/2)WD^(−1/2) be the propagation matrix by normalizing the columns of W, where D = diag[d_1, d_2, ..., d_n] is a diagonal matrix with [equation image]. … That is, the final output absorbs the prediction of other labels in a collaborative fashion.
By regarding the ground-truth of labeled data as final prediction, CMLP estimates R by [equation image] (3) … Updating F With Fixed Z When Z is fixed, the remaining items constitute an objective of soft label propagation algorithm with independent label Z. We first derive the gradient by, [equation image] (5)”;) determining a second label of the target object according to the first label of the target object, the annotation label and second label and the corresponding first association relationship of each of the plurality of first sample objects as a recognition result of the target object. (Wang [fig(s) 2] [sec(s) 3] “We denote the instance matrix by X ∈ R^(n×p). The target matrix of labeled data is denoted by Y ∈ {−1, +1}^(l×q) (q << n). Given an undirected graph G = <E, V, W>, vanilla label propagation method iteratively updates the labels of one certain node by aggregating its neighbor’s label information. Here E, V are edge, vertex sets, respectively. W = [w_ij]_(n×n) is a non-negative weight matrix. Let P = D^(−1/2)WD^(−1/2) be the propagation matrix by normalizing the columns of W, where D = diag[d_1, d_2, ..., d_n] is a diagonal matrix with [equation image]. … That is, the final output absorbs the prediction of other labels in a collaborative fashion. By regarding the ground-truth of labeled data as final prediction, CMLP estimates R by [equation image] (3) … Updating F With Fixed Z When Z is fixed, the remaining items constitute an objective of soft label propagation algorithm with independent label Z.
We first derive the gradient by, [equation image] (5)”;) However, Wang does not appear to explicitly teach: An object recognition method performed by an [electronic] device, the method comprising: (Note: Hereinafter, if a limitation has one or more bold underlines, the one or more underlined claim languages indicate that they are taught by the current prior art reference, while the one or more non-underlined claim languages indicate that they have been taught already by one or more previous art references.) Ye teaches An object recognition method performed by an electronic device, the method comprising: (Ye [sec(s) 1] “In this paper, we propose a GPU-based framework to support large scale LP processing, called GLP. To ease the development of different LP variants, we offer a set of user-defined APIs. These APIs provide expressive and bulk-synchronize abstractions for data engineers to quickly deploy LP variants tailored for their targeted applications, e.g., develop various fraud detection algorithms to enhance the system capability of detecting new frauds in e-commerce platforms, without domain knowledge of GPUs. Furthermore, the design of GLP can seamlessly support massive graphs that do not fit into the GPU memory entirely, and GLP handles such scenarios with a CPU-GPU hybrid execution mode”;) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Wang with the electronic device of Ye. One of ordinary skill in the art would have been motivated to combine in order to provide a performance improvement and achieve a speedup on average. (Ye [sec(s) 5] “We also note that, as the graph size exceeds the GPU memory, GLP switches to the CPU-GPU heterogeneous mode but the memory transfer overhead is less than 10% of the overhead running time (included in the elapsed time).
In addition to performance improvement, the monetary cost of deploying GLP is also significantly lower than the in-house solution. The CPUs used in each machine of the in-house solution cost 5890 (CPU) × 4 = 23560 dollars, whereas the CPU-GPU setup used in GLP costs 617 (CPU) + 2999 (GPU) = 3616 dollars, according to official prices from Intel and NVIDIA. Hence, GLP can also provide substantial monetary savings with one machine to handle the current detection workload efficiently. To verify the efficiency of GLP on a multi-GPU environment, we add one more NVIDIA Titan V GPU to the same single-machine setting. With two GPUs, GLP further achieves 1.8x speedup on average, as shown in Figure 7.”)

Regarding claim 2

The combination of Wang, Ye teaches claim 1. wherein the determining a second label of the target object according to the first label of the target object, the annotation label and second label and the corresponding first association relationship of each of the plurality of first sample objects as a recognition result of the target object comprises: (See claim 1) Wang further teaches taking the first label of the target object as an annotation label and an initial second label of the target object; (Wang [fig(s) 2] [sec(s) 3] “We denote the instance matrix by X ∈ R^(n×p). The target matrix of labeled data is denoted by Y ∈ {−1, +1}^(l×q) (q << n). Given an undirected graph G = <E, V, W>, vanilla label propagation method iteratively updates the labels of one certain node by aggregating its neighbor’s label information. Here E, V are edge, vertex sets, respectively. W = [w_ij]_(n×n) is a non-negative weight matrix. Let P = D^(−1/2)WD^(−1/2) be the propagation matrix by normalizing the columns of W, where D = diag[d_1, d_2, ..., d_n] is a diagonal matrix with [equation image]. … That is, the final output absorbs the prediction of other labels in a collaborative fashion.
By regarding the ground-truth of labeled data as final prediction, CMLP estimates R by [equation image] (3) … Updating F With Fixed Z When Z is fixed, the remaining items constitute an objective of soft label propagation algorithm with independent label Z. We first derive the gradient by, [equation image] (5)” [sec(s) 3.2] “[equation image] (11) In words, we expect that more sharing items lead to larger weight, while more popular item results in smaller contribution. … Time Complexity According to Eq. (12), original CMLP takes O(2q|E|) time, i.e. summing up the degree number of all the nodes, to update the labels of each node.”; e.g., “updates the labels of one certain node by aggregating its neighbor’s label information” and “soft label propagation algorithm” read(s) on “taking the first label of the target object as an annotation label and an initial second label of the target object”.) performing at least one label propagation between the target object and the first sample object on the basis of the first association relationship according to the annotation label and second label of the target object and the annotation label and second label of the first sample object, and obtaining an updated fifth label of the target object and an updated fifth label of the first sample object; and (Wang [fig(s) 2] [sec(s) 3] “We denote the instance matrix by X ∈ R^(n×p). The target matrix of labeled data is denoted by Y ∈ {−1, +1}^(l×q) (q << n). Given an undirected graph G = <E, V, W>, vanilla label propagation method iteratively updates the labels of one certain node by aggregating its neighbor’s label information. Here E, V are edge, vertex sets, respectively. W = [w_ij]_(n×n) is a non-negative weight matrix.
Let P = D^(−1/2)WD^(−1/2) be the propagation matrix by normalizing the columns of W, where D = diag[d_1, d_2, ..., d_n] is a diagonal matrix with [equation image]. … That is, the final output absorbs the prediction of other labels in a collaborative fashion. By regarding the ground-truth of labeled data as final prediction, CMLP estimates R by [equation image] (3) … Updating F With Fixed Z When Z is fixed, the remaining items constitute an objective of soft label propagation algorithm with independent label Z. We first derive the gradient by, [equation image] (5)” [sec(s) 3.2] “[equation image] (11) In words, we expect that more sharing items lead to larger weight, while more popular item results in smaller contribution. … Time Complexity According to Eq. (12), original CMLP takes O(2q|E|) time, i.e. summing up the degree number of all the nodes, to update the labels of each node.”;) fusing, according to the first association relationships, the updated fifth labels of the first sample objects having the first association relationships with the target object, to obtain the second label of the target object. (Wang [fig(s) 2] [sec(s) 3] “We denote the instance matrix by X ∈ R^(n×p). The target matrix of labeled data is denoted by Y ∈ {−1, +1}^(l×q) (q << n). Given an undirected graph G = <E, V, W>, vanilla label propagation method iteratively updates the labels of one certain node by aggregating its neighbor’s label information. Here E, V are edge, vertex sets, respectively. W = [w_ij]_(n×n) is a non-negative weight matrix. Let P = D^(−1/2)WD^(−1/2) be the propagation matrix by normalizing the columns of W, where D = diag[d_1, d_2, ..., d_n] is a diagonal matrix with [equation image]. … That is, the final output absorbs the prediction of other labels in a collaborative fashion.
By regarding the ground-truth of labeled data as final prediction, CMLP estimates R by [equation image] (3) … Updating F With Fixed Z When Z is fixed, the remaining items constitute an objective of soft label propagation algorithm with independent label Z. We first derive the gradient by, [equation image] (5)” [sec(s) 3.2] “[equation image] (11) In words, we expect that more sharing items lead to larger weight, while more popular item results in smaller contribution. … Time Complexity According to Eq. (12), original CMLP takes O(2q|E|) time, i.e. summing up the degree number of all the nodes, to update the labels of each node.”; e.g., “updates the labels of one certain node by aggregating its neighbor’s label information” read(s) on “fusing, according to the first association relationships, the updated fifth labels of the first sample objects”.)

Regarding claim 3

The combination of Wang, Ye teaches claim 1. Wang further teaches wherein the relevant object data comprises at least one type of relevant object data, and the first association relationship comprises a type of association relationship corresponding to each type of relevant object data; and (Wang [fig(s) 2] “An example of User-Item graph and the corresponding User-User graph. The U-I graph has 12 edges and the U-U graph has 24 edges. The popular item chocolate is linked to seven users, and thus the U-U graph contains a complete subgraph with seven nodes. The red lines in U-I graph represent fraud behaviours.” [sec(s) 3] “Yet, we proposed a generic correlation-aware propagation algorithm for ordinary SSML tasks. In effect, the graph can be collected in a variety of ways, e.g. k-nearest neighbor adjacency graph [Wang and Tsotsos, 2016], webpage links [Wu et al., 2014; Wu et al., 2015] and so on.
In the e-commerce fraud detection scenario, our goal is to determine whether a user has some fraud behaviors. In general, malicious merchants will hire many fraud users to buy the same items. Therefore, a natural choice is to synthesize a user-user mapping (U-U graph) from the user-item bipartite graph (U-I graph), i.e. two users are connected to each other if they are interested in the same item. Though this strategy avoids building a graph explicitly, which is usually time-consuming, we observe that it gives a really dense graph in practice (Figure 2). In many e-commerce platforms, a popular item can be connected to millions of users. Thus, U-U graph may contain many complete subgraphs and the number of edges grows dramatically. As we discussed above, the main computation cost of CMLP lies in the propagation procedure, i.e. aggregating label information from adjacent nodes. Thus, it will be demanding in large-scale datasets.”;) the determining a second label of the target object according to the first label of the target object, the annotation label and second label of each first sample object, and the first association relationship comprises: (See claim 1) Wang further teaches: obtaining a weight corresponding to each type of association relationship; and (Wang [fig(s) 2] [sec(s) 3.1] “We denote the instance matrix by X ∈ R^(n×p). The target matrix of labeled data is denoted by Y ∈ {−1, +1}^(l×q) (q << n). Given an undirected graph G = <E, V, W>, vanilla label propagation method iteratively updates the labels of one certain node by aggregating its neighbor’s label information. Here E, V are edge, vertex sets, respectively. W = [w_ij]_(n×n) is a non-negative weight matrix. Let P = D^(−1/2)WD^(−1/2) be the propagation matrix by normalizing the columns of W, where D = diag[d_1, d_2, ..., d_n] is a diagonal matrix with [equation image].
… Updating F With Fixed Z When Z is fixed, the remaining items constitute an objective of soft label propagation algorithm with independent label Z. We first derive the gradient by, [equation image] (5) … Time Complexity According to Eq. (12), original CMLP takes O(2q|E|) time, i.e. summing up the degree number of all the nodes, to update the labels of each node.” [sec(s) 3.2] “[equation image] (11) In words, we expect that more sharing items lead to larger weight, while more popular item results in smaller contribution.”;) determining the second label of the target object according to the first label of the target object, the annotation label and second label of each first sample object, each type of association relationship, and the weight corresponding to each type of association relationship. (Wang [fig(s) 2] [sec(s) 3] “We denote the instance matrix by X ∈ R^(n×p). The target matrix of labeled data is denoted by Y ∈ {−1, +1}^(l×q) (q << n). Given an undirected graph G = <E, V, W>, vanilla label propagation method iteratively updates the labels of one certain node by aggregating its neighbor’s label information. Here E, V are edge, vertex sets, respectively. W = [w_ij]_(n×n) is a non-negative weight matrix. Let P = D^(−1/2)WD^(−1/2) be the propagation matrix by normalizing the columns of W, where D = diag[d_1, d_2, ..., d_n] is a diagonal matrix with [equation image]. … That is, the final output absorbs the prediction of other labels in a collaborative fashion. By regarding the ground-truth of labeled data as final prediction, CMLP estimates R by [equation image] (3) … Updating F With Fixed Z When Z is fixed, the remaining items constitute an objective of soft label propagation algorithm with independent label Z.
We first derive the gradient by, [equation image] (5)” [sec(s) 3.2] “[equation image] (11) In words, we expect that more sharing items lead to larger weight, while more popular item results in smaller contribution. … Time Complexity According to Eq. (12), original CMLP takes O(2q|E|) time, i.e. summing up the degree number of all the nodes, to update the labels of each node.”;)

Regarding claim 4

The combination of Wang, Ye teaches claim 1. Wang further teaches further comprising: determining, for the plurality of first sample objects, an influence of the first sample object according to the relevant object data; and (Wang [fig(s) 2] [sec(s) 3.1] “We denote the instance matrix by X ∈ R^(n×p). The target matrix of labeled data is denoted by Y ∈ {−1, +1}^(l×q) (q << n). Given an undirected graph G = <E, V, W>, vanilla label propagation method iteratively updates the labels of one certain node by aggregating its neighbor’s label information. Here E, V are edge, vertex sets, respectively. W = [w_ij]_(n×n) is a non-negative weight matrix. Let P = D^(−1/2)WD^(−1/2) be the propagation matrix by normalizing the columns of W, where D = diag[d_1, d_2, ..., d_n] is a diagonal matrix with [equation image]. … Updating F With Fixed Z When Z is fixed, the remaining items constitute an objective of soft label propagation algorithm with independent label Z. We first derive the gradient by, [equation image] (5)” [sec(s) 3.2] “[equation image] (11) In words, we expect that more sharing items lead to larger weight, while more popular item results in smaller contribution. … Time Complexity According to Eq. (12), original CMLP takes O(2q|E|) time, i.e. summing up the degree number of all the nodes, to update the labels of each node.”; e.g., “weight” read(s) on “influence”.)
the determining a second label of the target object according to the first label of the target object, the annotation label and second label and the corresponding first association relationship of each of the plurality of first sample objects as a recognition result of the target object comprising: (See claim 1) Wang further teaches: determining the second label of the target object according to the first label of the target object, the annotation label and second label of each first sample object, the influences of the target object and each first sample object, and the first association relationship. (Wang [fig(s) 2] [sec(s) 3], same passage quoted above, including Eq. (3), the “Updating F With Fixed Z” derivation, Eq. (11), and the time-complexity discussion;)

Regarding claim 6 The combination of Wang, Ye teaches claim 1. Wang further teaches wherein the reference data set is obtained by: obtaining a second training data set, the second training data set comprising the relevant object data of the plurality of first sample objects with the annotation labels; (Wang [fig(s) 2] [sec(s) 3], same passage quoted above; [sec(s) 4] “We choose four real-world multi-label datasets from different task domains: 1) Medical [Pestian et al., 2007]: a text dataset contains clinical free texts, each of which is with 45 ICD9-CM labels, from CCHMC Department of Radiology. 2) Image [Wang et al., 2019]: a collection of 2,000 images that are annotated by 5 labels. 3) Slashdot [Read et al., 2009]: a web text dataset collects 3,782 technology-related news from 22 categories. 4) Eurlex-sm [Loza Mencía and Fürnkranz, 2008]: a large text dataset contains 19,348 legal documents about European Union law, having 201 subject matters tags.”;)

determining second association relationships between the various first sample objects in the second training data set according to the relevant object data of each first sample object; and (Wang [fig(s) 2] [sec(s) 3], same passage quoted above;)

taking the annotation label of each first sample object as an initial third label of the first sample object, repeatedly performing the following operations until updated third labels of the plurality of first sample objects satisfy a preset condition, and determining that the third label of each first sample object when the preset condition is satisfied is the second label of the first sample object: (Wang [fig(s) 2] [sec(s) 3.1], same passage quoted above, and further: “When the iterations end, we have to transform the output to final prediction.”;)

obtaining, on the basis of the second association relationships and the annotation labels and third labels of the various first sample objects, an updated fourth label of each first sample object by performing label propagation between the plurality of first sample objects; and (Wang [fig(s) 2] [sec(s) 3.1], same passage quoted above, including “When the iterations end, we have to transform the output to final prediction.”;)

fusing, for each first sample object according to the second association relationships, the fourth labels of the various first sample objects having the association relationships with the first sample object, to obtain a new third label of the first sample object. (Wang [fig(s) 2] [sec(s) 3], same passage quoted above; e.g., “updates the labels of one certain node by aggregating its neighbor’s label information” read(s) on “fusing, for each first sample object according to the second association relationships, the fourth labels of the various first sample objects”.)

Regarding claim 7 The claim is a system claim corresponding to the method claim 1, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.

Regarding claim 8 The claim is a system claim corresponding to the method claim 2, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.

Regarding claim 9 The claim is a system claim corresponding to the method claim 3, and is directed to largely the same subject matter.
Thus, it is rejected for the same reasons as given in the rejections of the method claim. Regarding claim 10 The claim is a system claim corresponding to the method claim 4, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim. Regarding claim 12 The claim is a system claim corresponding to the method claim 6, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim. Regarding claim 13 The claim is a computer-readable storage medium claim corresponding to the method claim 1, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim. Regarding claim 14 The claim is a computer-readable storage medium claim corresponding to the method claim 2, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim. Regarding claim 15 The claim is a computer-readable storage medium claim corresponding to the method claim 3, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim. Regarding claim 16 The claim is a computer-readable storage medium claim corresponding to the method claim 4, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim. Regarding claim 18 The claim is a computer-readable storage medium claim corresponding to the method claim 6, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim. Claim(s) 5, 11, 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (Collaboration Based Multi-Label Propagation for Fraud Detection) in view of Ye et al. 
(GPU-Accelerated Graph Label Propagation for Real-Time Fraud Detection) in view of Liu et al. (Pick and Choose: A GNN-based Imbalanced Learning Approach for Fraud Detection)

Regarding claim 5 The combination of Wang, Ye teaches claim 1. Wang further teaches further comprising: determining a [proportion] of a number of objects of each object type of the target object and the plurality of first sample objects according to the first label of the target object and the annotation label of each first sample object; and (Wang [fig(s) 2] [sec(s) 1] “To bridge this gap, we propose to assign each user multiple fraud labels. Moreover, since there are hundreds of millions of users, fully-annotation is impossible and only a few labeled data is available. Hence, it is formalized to a semi-supervised multi-label learning (SSML) problem. … Consequently, this paper proposes a novel collaboration based multi-label propagation method (CMLP) for large-scale SSML fraud detection task.” [sec(s) 3] “We denote the instance matrix by X ∈ Rn×p. The target matrix of labeled data is denoted by Y ∈ {−1, +1}l×q (q << n). Given an undirected graph G = <E, V, W>, vanilla label propagation method iteratively updates the labels of one certain node by aggregating its neighbor’s label information. … By regarding the ground-truth of labeled data as final prediction, CMLP estimates R by [Eq. (3), image]” [sec(s) 3.2] “In the e-commerce fraud detection scenario, our goal is to determine whether a user has some fraud behaviors. … As we discussed above, the main computation cost of CMLP lies in the propagation procedure, i.e. aggregating label information from adjacent nodes. Thus, it will be demanding in large-scale datasets.”;)

the determining a second label of the target object according to the first label of the target object, the annotation label and second label and the corresponding first association relationship of each of the plurality of first sample objects as a recognition result of the target object comprising: (See claim 1) taking the [proportion] of the number of objects of each object type as a weight, weighting the first labels of the corresponding object type of the target object, and weighting the annotation labels of the corresponding object type of the plurality of first sample objects; and (Wang [fig(s) 2] [sec(s) 3.1], same passage quoted above, including the “Updating F With Fixed Z” derivation, Eq. (11), and the time-complexity discussion;)

determining the second label of the target object according to a weighted first label of the target object, a weighted annotation label and a weighted second label of each first sample object, and the first association relationship. (Wang [fig(s) 2] [sec(s) 3.1], same passage quoted above;)

However, the combination of Wang, Ye does not appear to explicitly teach: determining a [proportion] of a number of objects of each object type of the target object and the plurality of first sample objects according to the first label of the target object and the annotation label of each first sample object; and taking the [proportion] of the number of objects of each object type as a weight.

Liu teaches determining a proportion of a number of objects of each object type of the target object and the plurality of first sample objects according to the first label of the target object and the annotation label of each first sample object; and (Liu [fig(s) 1] [sec(s) 1] “To address the above challenges, in this paper, we propose a GNN-based imbalanced learning approach for graph-based fraud detection. For the algorithm side’s challenge, we design a label-balanced sampler to pick nodes and edges to train. The probability assigned for each node is inversely proportional to its label frequency so that nodes of the minority class are more likely to be picked. As a result, the induced sub-graph of the picked nodes would have a balanced label distribution. For the challenges of the application side, we propose a neighborhood sampler to choose neighbors with a learnable parameterized distance function. For the fraud target node, the redundant links could be filtered by choosing neighbors that are far from the target measured by the distance and removing them from the neighbor set. And the necessary links which are beneficial for fraud prediction would be created by choosing similar nodes of the fraud class and regarding them as neighbors. We integrate the above two stages of graph sampling and neighbor selection into general GNN frameworks and name our model as Pick and Choose Graph Neural Network (PC-GNN). Our contribution could be listed as follows. • We formulate the graph-based fraud detection problem as an imbalanced node classification task and propose a GNN-based imbalanced learning approach to resolve the class imbalance problem on graphs. • We design a label-balanced sampler to pick nodes and edges for sub-graph training and a neighborhood sampler to choose neighbors for over-sampling the neighborhood of the minority class and under-sampling the neighborhood of the majority class.”;)

taking the proportion of the number of objects of each object type as a weight. (Liu [fig(s) 1] [sec(s) 1], same passage quoted above; e.g., “design a label-balanced sampler to pick nodes and edges to train” along with “balanced label distribution” read(s) on “weight”.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Wang with the proportion-as-weight teaching of Liu. One of ordinary skill in the art would have been motivated to combine in order to provide better performance compared to the conventional approaches. (Liu [sec(s) 4] “experimental results demonstrate that PC-GNN outperforms these two methods, with 3%~5% improvement in AUC and 0.7%~28% improvement in GMean. The F1-scores of PC-GNN and CARE-GNN are comparable and higher than that of GraphConsis. CARE-GNN performs better than GraphConsis in all these metrics, which is consistent with the conclusion in the paper of CARE-GNN.”)

Regarding claim 11 The claim is a system claim corresponding to the method claim 5, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.

Regarding claim 17 The claim is a computer-readable storage medium claim corresponding to the method claim 5, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.

Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEHWAN KIM whose telephone number is (571) 270-7409. The examiner can normally be reached Mon - Fri 9:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley can be reached on (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SEHWAN KIM/Examiner, Art Unit 2129 1/17/2026
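Editor's note: the Wang citations above repeatedly quote the same label-propagation machinery, a propagation matrix P = D−1/2WD−1/2 built from the graph weights, followed by iterative aggregation of each node's neighbor labels. For readers mapping the claim language to the reference, here is a minimal sketch of that vanilla scheme. This is an illustrative reconstruction, not the applicant's claimed method and not CMLP's collaborative variant; the function name, the alpha mixing parameter, and the hard-clamping of labeled rows are assumptions.

```python
import numpy as np

def label_propagation(W, Y, labeled_mask, alpha=0.9, iters=50):
    """Vanilla label propagation over an undirected graph.

    W: (n, n) non-negative symmetric weight matrix.
    Y: (n, q) initial label matrix; rows of unlabeled nodes are zero.
    labeled_mask: (n,) bool array marking labeled nodes.
    alpha: trade-off between neighbor aggregation and initial labels.
    """
    d = W.sum(axis=1)                      # node degrees, d_i = sum_j w_ij
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    P = D_inv_sqrt @ W @ D_inv_sqrt        # P = D^{-1/2} W D^{-1/2}
    F = Y.copy().astype(float)
    for _ in range(iters):
        F = alpha * (P @ F) + (1 - alpha) * Y  # aggregate neighbor labels
        F[labeled_mask] = Y[labeled_mask]      # keep ground truth fixed
    return F
```

Each sweep mixes neighbor aggregation (PF) with the initial labels, matching the quoted "updates the labels of one certain node by aggregating its neighbor's label information"; a sparse implementation of the same sweep touches each edge a constant number of times per label, in line with the quoted O(2q|E|) per-update cost.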
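Editor's note: Liu's contribution to the §103 combination is the inverse-label-frequency weighting, where each node's pick probability is inversely proportional to its label frequency so that minority-class (fraud) nodes are oversampled. A minimal sketch of that idea follows; it is illustrative only. PC-GNN's actual sampler also involves a learnable distance function, and the function name and normalization here are assumptions.

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-node sampling probabilities inversely proportional to
    the frequency of each node's class label."""
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes.tolist(), (counts / counts.sum()).tolist()))
    w = np.array([1.0 / freq[int(y)] for y in labels])
    return w / w.sum()  # normalize to a probability distribution
```

With labels [0, 0, 0, 1], the single minority node receives probability 1/2 while each majority node receives 1/6, so the two classes contribute equal total probability mass, which is the "balanced label distribution" the quoted passage describes.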

Prosecution Timeline

May 10, 2023
Application Filed
Jan 17, 2026
Non-Final Rejection — §101, §103, §112
Apr 13, 2026
Applicant Interview (Telephonic)
Apr 13, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602595
SYSTEM AND METHOD OF USING A KNOWLEDGE REPRESENTATION FOR FEATURES IN A MACHINE LEARNING CLASSIFIER
2y 5m to grant Granted Apr 14, 2026
Patent 12602580
Dataset Dependent Low Rank Decomposition Of Neural Networks
2y 5m to grant Granted Apr 14, 2026
Patent 12602581
Systems and Methods for Out-of-Distribution Detection
2y 5m to grant Granted Apr 14, 2026
Patent 12602606
APPARATUSES, COMPUTER-IMPLEMENTED METHODS, AND COMPUTER PROGRAM PRODUCTS FOR IMPROVED GLOBAL QUBIT POSITIONING IN A QUANTUM COMPUTING ENVIRONMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12541722
MACHINE LEARNING TECHNIQUES FOR VALIDATING AND MUTATING OUTPUTS FROM PREDICTIVE SYSTEMS
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
99%
With Interview (+65.6%)
4y 1m
Median Time to Grant
Low
PTA Risk
Based on 144 resolved cases by this examiner. Grant probability derived from career allow rate.
