DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Arguments
Applicant’s arguments, see pp. 1-3, filed 09 February 2026, with respect to the rejection(s) of claim(s) 1, 4, 6-8, 11, 13-15, 18 and 20 under 35 U.S.C. 103 as being unpatentable over Komoto et al (US PG Pub. No. 2011/0255748) in view of Kohita et al (US PG Pub. No. 2022/0076138) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Bondugula et al (US PG Pub. No. 2021/0357679), which was previously cited in the reasons for allowance of claims 2, 9 and 16 in the non-final rejection. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
Allowable Subject Matter
Claims 15-18 and 20 are allowed.
Claims 2-3, 5, 9-10 and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
With regards to claims 2, 9 and 16, several of the features of these claims were known in the art as evidenced by the combination of Komoto et al (US PG Pub. No. 2011/0255748) in view of Kohita et al (US PG Pub. No. 2022/0076138), which renders obvious the limitations of parent claims 1, 8 and 15, respectively. In particular, Kohita discloses a target identification network model (“SVCL”) to obtain an identification result (e.g., “clustering”) that determines a class to which the target to be identified belongs, wherein the target identification network model comprises a loss function that is based on intra-class constraints (e.g., “intra-class distance loss” or “l^(k)_intra”) and inter-class constraints (e.g., “inter-class distance loss” or “l^(i,j)_inter”), the intra-class constraints (e.g., eqn. 4) are configured to constrain an intra-class distance between sample image features (e.g., “v”) of a sample target and a class center (e.g., “centroid vector (c(k))”) of a class (“k-th class”) to which the sample target belongs, and the inter-class constraints (e.g., eqn. 6) are configured to constrain inter-class distances between class centers of different classes (e.g., “c(i)” and “c(j)”) at: ¶¶ [0042]-[0044]. However, neither Komoto nor Kohita discloses the recited inter-class constraint equation; i.e.,
[Equation image omitted: media_image1.png, greyscale]
A portion of the equation recited by applicant was known in the art as evidenced by Bondugula et al (US PG Pub. No. 2021/0357679) at ¶¶ [0061]-[0064]. In particular, Bondugula discloses a cosine similarity measure, also known in the art as an “inner product” at equation 1. However, although the inter-class constraint equation recited by applicant includes the same cosine similarity measure, it further divides that measure by “K(K-1)”, wherein K is the number of classes. None of the references of record recite the “K(K-1)” divisor.
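For illustration only, an inter-class term of the kind discussed above, a cosine similarity (an inner product of normalized vectors) averaged over the K(K-1) ordered pairs of distinct class centers, can be sketched as follows. This sketch is not the applicant's recited equation; the function and variable names are hypothetical:

```python
import math

def cosine_similarity(u, v):
    # Inner product of u and v divided by the product of their norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def inter_class_cosine_loss(centers):
    # Average pairwise cosine similarity over the K(K-1) ordered pairs
    # of distinct class centers; minimizing this value pushes the
    # centers apart (toward orthogonal or opposing directions).
    K = len(centers)
    total = sum(
        cosine_similarity(centers[i], centers[j])
        for i in range(K) for j in range(K) if i != j
    )
    return total / (K * (K - 1))
```

With three 2-D centers along the axes, e.g. [1,0], [0,1] and [-1,0], the only nonzero similarities are the two opposing centers, so the averaged term is negative, reflecting well-separated centers.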
With regards to claims 3, 5, 10, 12 and 15, several of the features of these claims were known in the art as evidenced by the combination of Komoto et al (US PG Pub. No. 2011/0255748) in view of Kohita et al (US PG Pub. No. 2022/0076138), which renders obvious the limitations of parent claims 1, 8 and 15, respectively. In particular, Kohita discloses a target identification network model (“SVCL”) to obtain an identification result (e.g., “clustering”) that determines a class to which the target to be identified belongs, wherein the target identification network model comprises a loss function that is based on intra-class constraints (e.g., “intra-class distance loss” or “l^(k)_intra”) and inter-class constraints (e.g., “inter-class distance loss” or “l^(i,j)_inter”), the intra-class constraints (e.g., eqn. 4) are configured to constrain an intra-class distance between sample image features (e.g., “v”) of a sample target and a class center (e.g., “centroid vector (c(k))”) of a class (“k-th class”) to which the sample target belongs, and the inter-class constraints (e.g., eqn. 6) are configured to constrain inter-class distances between class centers of different classes (e.g., “c(i)” and “c(j)”) at: ¶¶ [0042]-[0044]. However, neither Komoto nor Kohita discloses the recited inter-class constraint equation; i.e.,
[Equation image omitted: media_image2.png, greyscale]
A portion of the equation recited by applicant was known in the art as evidenced by eqn. 6 of the Kohita reference at ¶ [0044]. In particular, applicant’s recited “α-D(Ci,Cj)” reads upon the “l^(k)_intra - c(i) - c(j)” expression taught by Kohita in eqn. 6. However, although the inter-class constraint equation recited by applicant includes the same distance measure, it further divides that measure by “K(K-1)”, wherein K is the number of classes. None of the references of record recite the “K(K-1)” divisor.
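For illustration only, a margin-style inter-class term of the form α - D(Ci, Cj), averaged over the K(K-1) ordered pairs of distinct class centers, can be sketched as follows. The hinge max(0, ·), the Euclidean distance, and all names are assumptions made for the sketch, not taken from the record:

```python
import math

def euclidean(u, v):
    # Euclidean distance between two equal-length vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def inter_class_margin_loss(centers, alpha):
    # Hinge-style term: penalize each pair of class centers whose
    # distance falls short of the margin alpha, averaged over the
    # K(K-1) ordered pairs of distinct centers.
    K = len(centers)
    total = sum(
        max(0.0, alpha - euclidean(centers[i], centers[j]))
        for i in range(K) for j in range(K) if i != j
    )
    return total / (K * (K - 1))
```

Centers farther apart than the margin contribute nothing, so minimizing this term only pushes on pairs of centers that are still too close.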
With regards to claims 16-18 and 20, these claims depend from claim 15 and therefore incorporate the features of that claim that were found allowable.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4, 6-8, 11, 13-14 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Komoto et al (US PG Pub. No. 2011/0255748) in view of Kohita et al (US PG Pub. No. 2022/0076138), and in further view of Bondugula et al (US PG Pub. No. 2021/0357679).
With regards to claim 1, the limitations of this claim are obvious over the teachings of the prior art, as evidenced by the following references:
The Komoto reference
Komoto discloses obtaining an image containing a target to be identified at: ¶ [0084]; ¶ [0101].
Komoto discloses performing feature extraction on the image to obtain image features in the image at: ¶ [0085]; ¶ [0103].
Komoto discloses inputting the image features into a target identification network model (“subclass classification unit 105”) to obtain an identification result (e.g., “clustering”) that determines a class to which the target to be identified belongs at: ¶ [0087]; ¶¶ [0118]-[0119]; ¶¶ [0123]-[0126] and FIGS. 7-8. However, Komoto does not specify the target identification network model (“subclass classification unit 105”) comprises a loss function that is based on intra-class constraints and inter-class constraints. However, this limitation was known in the art as evidenced by the Kohita reference.
The Kohita reference
Kohita discloses a target identification network model (“SVCL”) to obtain an identification result (e.g., “clustering”) that determines a class to which the target to be identified belongs, wherein the target identification network model comprises a loss function that is based on intra-class constraints (e.g., “intra-class distance loss” or “l^(k)_intra”) and inter-class constraints (e.g., “inter-class distance loss” or “l^(i,j)_inter”), the intra-class constraints (e.g., eqn. 4) are configured to constrain an intra-class distance between sample image features (e.g., “v”) of a sample target and a class center (e.g., “centroid vector (c(k))”) of a class (“k-th class”) to which the sample target belongs, and the inter-class constraints (e.g., eqn. 6) are configured to constrain inter-class distances between class centers of different classes (e.g., “c(i)” and “c(j)”) at: ¶¶ [0042]-[0044]. At the time of the filing of the present application, it would have been obvious to a person of ordinary skill in the art to use a target identification network model (“SVCL”) to constrain clusters using a loss function that is based on intra-class constraints and inter-class constraints, as taught by Kohita, when clustering to obtain an identification result (e.g., “clustering”) that determines a class, as taught by Komoto. The motivation for doing so comes from Kohita, which discloses, “The distributions of state vectors in the tests with SVCL had more condensed clusters, and, in particular, the distributions in the test with p=10 was more optimal for the decision tree. Thus, an l^p-norm with a larger p value in SVCL distributes state vectors in a same class close to each other and distributes centroids of different classes far away from each other.” (¶ [0052]). Therefore, it would have been obvious to combine Kohita with Komoto to obtain the invention specified in this claim.
The Bondugula reference
Bondugula discloses inter-class constraints configured to constrain inter-class distances between class centers of different classes and inter-class angles between the class centers of different classes at ¶¶ [0059]-[0064]. At the time of the filing of the present application, it would have been obvious to a person of ordinary skill in the art to constrain inter-class angles between the class centers of different classes, as taught by Bondugula, when constraining inter-class distances between class centers of different classes, as taught by Kohita. The motivation for doing so comes from Bondugula, which discloses, “In the above-described processes for selecting a cluster and splitting the cluster, distances between data points in the cluster and between the data points and the centroid of the cluster are calculated. The distances can be measured, for example, using Euclidean distances. However, for data with a high dimension (e.g., higher than 10), Euclidean distance can lose the ability to adequately separate points in this high dimension space. This can be more problematic for data having a dimension as high as 100 or even 1000. To address this issue, different distance measurements can be utilized.” (¶ [0060]). Therefore, it would have been obvious to combine Bondugula with Kohita and Komoto to obtain the invention specified in this claim.
With regards to claim 4, Kohita discloses the intra-class constraints are expressed as follows:
[Equation image omitted: media_image3.png, greyscale]
, embodied in Kohita in eqn. 4 as
[Equation image omitted: media_image4.png, greyscale]
, where N (“n(k)”) represents a total number of sample images, fp (“v”) represents sample image features of a p-th sample image, Cp (“c(k)”) represents the class center of the class to which the sample target of the p-th sample image belongs, and D(fp, Cp) (“c(k) - v”) represents a distance between fp (“v”) and Cp (“c(k)”) at ¶¶ [0042]-[0043].
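For illustration only, an intra-class constraint of the kind described above, (1/N) times the sum of D(fp, Cp) over all N sample images, can be sketched as follows. The Euclidean distance and all names are assumptions made for the sketch:

```python
import math

def intra_class_loss(features, centers_by_class, labels):
    # (1/N) * sum over p of D(f_p, C_p): the mean distance between each
    # sample image feature f_p and the center C_p of the class to which
    # that sample belongs.
    N = len(features)
    total = 0.0
    for f, y in zip(features, labels):
        c = centers_by_class[y]
        total += math.sqrt(sum((a - b) ** 2 for a, b in zip(f, c)))
    return total / N
```

Minimizing this term pulls each sample's features toward its own class center, tightening the clusters.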
With regards to claim 6, Komoto discloses obtaining a plurality of sample images of the sample target at ¶ [0084]; ¶ [0101].
Kohita discloses, with a goal of minimizing the loss function, training a to-be-trained identification network model iteratively using samples until the loss value of the loss function is less than or equal to a preset loss value threshold (i.e., lower than any other) at ¶¶ [0045]-[0048]. The motivation for the combination is the same as previously presented.
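For illustration only, training iteratively until the loss value is less than or equal to a preset loss value threshold can be sketched as follows. The quadratic toy loss in the usage example and all names are hypothetical, not taken from the record:

```python
def train_until_threshold(loss_fn, params, update_fn, threshold, max_iters=1000):
    # Iteratively update the model parameters until the loss value of
    # the loss function is less than or equal to the preset threshold.
    for _ in range(max_iters):
        loss = loss_fn(params)
        if loss <= threshold:
            return params, loss
        params = update_fn(params)
    return params, loss_fn(params)

# Toy usage: minimize (p - 3)^2 with a simple gradient step.
# loss_fn = lambda p: (p - 3.0) ** 2
# update_fn = lambda p: p - 0.1 * 2.0 * (p - 3.0)
```

The loop stops as soon as the threshold criterion is met, mirroring the "less than or equal to a preset loss value threshold" stopping condition.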
With regards to claim 7, Kohita discloses training the to-be-trained identification network model iteratively using the sample images comprises: iteratively update the class center of the class to which the sample target belongs at ¶ [0042]. The motivation for the combination is the same as previously presented.
With regards to claim 8, Komoto discloses one or more processors; and a memory coupled to the one or more processors, the memory storing programs that, when executed by the one or more processors, cause performance of its operations at ¶¶ [0096]-[0097]. The steps performed by the apparatus of this claim are obvious over the combination of Komoto and Kohita for the same reasons as were provided in the discussion of claim 1, which recites a method performing these same steps.
With regards to claim 11, the steps performed by the apparatus of this claim are obvious over the combination of Komoto and Kohita for the same reasons as were provided in the discussion of claim 4, which recites a method performing these same steps.
With regards to claim 13, the steps performed by the apparatus of this claim are obvious over the combination of Komoto and Kohita for the same reasons as were provided in the discussion of claim 6, which recites a method performing these same steps.
With regards to claim 14, the steps performed by the apparatus of this claim are obvious over the combination of Komoto and Kohita for the same reasons as were provided in the discussion of claim 7, which recites a method performing these same steps.
With regards to claim 21, Kohita discloses a loss value of the loss function is positively correlated with the intra-class distance (e.g., “intra-class distance loss” or “l^(k)_intra” increases as the distance increases in eqn. 4) and negatively correlated with the inter-class distances (e.g., “inter-class distance loss” or “l^(i,j)_inter” decreases as the distance increases in eqn. 6) at ¶¶ [0043]-[0044]. The motivation for the combination is the same as previously presented.
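For illustration only, the correlations noted above (an intra-class term that grows with distance, and an inter-class term that shrinks as the distance between class centers grows) can be sketched with stand-in forms. The quadratic and hinge choices are assumptions for the sketch, not Kohita's equations:

```python
def intra_term(dist):
    # Stand-in intra-class distance loss: grows as the distance between
    # a sample feature and its class center grows.
    return dist ** 2

def inter_term(dist, alpha=5.0):
    # Stand-in margin-style inter-class loss: shrinks toward zero as the
    # distance between class centers grows past the margin alpha.
    return max(0.0, alpha - dist)
```

These two directions of correlation are what make minimizing the combined loss both condense each class and separate the class centers.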
Bondugula discloses an inter-class distance comprising inter-class angles between the class centers of different classes at ¶¶ [0059]-[0064]. The motivation for the combination is the same as previously presented.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID F DUNPHY whose telephone number is (571)270-1230. The examiner can normally be reached 9 am - 5 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID F DUNPHY/Primary Examiner, Art Unit 2673