Prosecution Insights
Last updated: April 19, 2026
Application No. 18/435,265

SYSTEM AND METHOD FOR CLASSIFYING NOVEL CLASS OBJECTS

Non-Final OA: §101, §102, §103, §112
Filed: Feb 07, 2024
Examiner: SHIMELES, BEZAWIT NOLAWI
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 100% (1 granted / 1 resolved; +38.0% vs TC avg) — above average
Interview Lift: -100.0% on resolved cases with interview
Avg Prosecution: 2y 9m (typical timeline); 13 applications currently pending
Total Applications: 14 (career history, across all art units)

Statute-Specific Performance

§101: 17.4% (-22.6% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 1 resolved case.

Office Action

Rejections under §101, §102, §103, and §112.
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/09/2024 is being considered by the examiner.

Claim Objections

Claims 1-8 are objected to because of the following informalities:

In claim 1, line 5, the term “during the training of the novel classifier” should read “during a training of the novel classifier,” as the term is being introduced for the first time with no earlier recitation or limitation of a training.

In claim 2, line 1, the term “wherein in (a) above” should read “wherein (a) further comprises.”

In claim 3, line 1, the term “wherein in (a) above” should read “wherein (a) further comprises.”

In claim 4, line 1, the term “wherein in (b) above” should read “wherein (b) further comprises.”

In claim 5, line 2, the term “wherein in (c) above” should read “wherein (c) further comprises.”

In claim 5, line 2, the term “for a novel class” should read “for a novel class object” in order to be more specific.

In claim 5, line 3, the term “a feature map (hereinafter, referred to as ‘first feature map’) is obtained” should read “a first feature map is obtained.”

In claim 5, line 5, the term “a feature map in a feature pyramid network (hereinafter, referred to as ‘FPN feature map’) is obtained” should read “a feature map in a feature pyramid network (FPN) is obtained.”

Claim 5 and its associated dependent claims are objected to because the limitation “it” is a pronoun with no antecedent, and it is therefore difficult to determine whether “it” refers to the object recognition, the novel class, or another claimed limitation derived from independent claim 1. Please see claim 5, line 2. For examination purposes, the Office has interpreted the term “it” in claim 5 as “a novel class.”

In claim 6, line 1, the term “wherein in (c) above” should read “wherein (c) further comprises.”

In claim 7, line 1, the term “wherein in (c) above” should read “wherein (c) further comprises.”

In claim 8, line 1, the term “wherein in (c) above” should read “wherein (c) further comprises.”

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim 9 recites limitations that use words like “means” (or “step”) or similar terms with functional language and does invoke 35 U.S.C. 112(f): claim 9 recites the limitation “an input interface device configured to receive…” [line 2]. Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

After a careful analysis, as discussed above, and a careful review of the specification, the following limitation in claim 9 was considered: “an input interface device” (Fig. 4, #1050, Paragraph [0087] – “Referring to FIG. 4, the computer system, which is designated by reference numeral 1000, may include at least one of a processor 1010, a memory 1030, an input interface device 1050, an output interface device 1060, and a storage device 1040 that communicate with each other through a bus 1070.”). Thus, an input interface device does not have sufficient structure associated with it.

If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 9 and its dependent claims 10-16 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. As described above, the disclosure does not provide adequate structure to perform the claimed function in the recited limitations. Claim 9 recites the limitation “an input interface device configured to receive…” [line 2]. The specification does not demonstrate that applicant has made an invention that achieves the claimed function, because the invention is not described with sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 9 and its dependent claims 10-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 9 recites the limitation “an input interface device configured to receive…” [line 2]. Claim 9 invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification is devoid of adequate structure to perform the claimed functions and does not provide sufficient detail such that one of ordinary skill in the art would understand which structure performs the claimed function. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may: (1) amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (2) amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (3) amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function, so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (1) amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (2) stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Examiner notes the following: regarding the term “classifier” in the claims – although the claims do not explicitly recite the language “machine learning model,” the term “classifier” is widely used to refer to a machine learning model.
Furthermore, the claims recite “learning” and “training steps” for the classifier, and the examiner therefore interprets the term to mean a machine learning model.

Claims 1-16 are rejected under 35 U.S.C. 101.

Regarding independent claim 1 and its dependent claims 2-8: claim 1 is directed to a process, which falls within one of the four statutory categories. Claim 1 recites, in part: “constructing a novel classifier considering prior knowledge acquired from a base classifier; and learning a parameterized weight coefficient of a novel classifier model during the training of the novel classifier.” The limitations, as drafted, are processes that, under the broadest reasonable interpretation, cover performance of the limitations in the mind and fall within the “Mathematical Concepts” grouping of abstract ideas.

The limitation of “constructing a novel classifier considering prior knowledge acquired from a base classifier” is, under BRI, a series of mathematical operations recited at a high level of generality; these are concepts related to machine learning algorithms that are well known in the art. “Considering prior knowledge…” is a mental-process abstract idea – while constructing the novel classifier through math, the human mind can also observe further given conditions/data and consider them for calculations. The limitation of “learning a parameterized weight coefficient of a novel classifier model during the training of the novel classifier” is likewise, under BRI, a series of mathematical operations recited at a high level of generality. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites the following additional element: “a method of classifying novel objects.” The additional element is part of the preamble indicating an intended purpose, which merely describes the abstract idea itself without adding a functional limitation. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim as a whole is directed to an abstract idea. Please see MPEP § 2106.04(d).III.C. There are no additional elements, beyond those indicated above, that amount to significantly more than the judicial exception. Please see MPEP § 2106.05. For all of the foregoing reasons, claim 1 does not comply with the requirements of 35 U.S.C. 101, and the dependent claims 2-8 do not provide elements that overcome the deficiencies of independent claim 1.

Claim 2 recites, in part, “wherein in (a) above, a parameter of the base classifier previously learned for a set of base classes is considered as the prior knowledge,” which includes a wherein clause giving further specification of the type of data used as prior knowledge, without further integrating the abstract idea into a practical application, nor being considered significantly more.
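To make the disputed claim language concrete for readers, the following is a minimal sketch of the kind of technique the claim recites: a novel-class classifier constructed from a frozen base classifier's weights (the prior knowledge), with a parameterized weight-coefficient matrix learned during training. This is an illustration only; all names, shapes, and the loss function are assumptions, not the applicant's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_BASE, N_NOVEL = 64, 10, 3                 # feature dim, base classes, novel classes
W_base = rng.normal(size=(N_BASE, D))          # frozen prior knowledge (base classifier)
C = 0.1 * rng.normal(size=(N_NOVEL, N_BASE))   # parameterized weight coefficients

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grad(X, y, C):
    """Cross-entropy over novel classes; gradient w.r.t. the coefficients C only."""
    W_novel = C @ W_base                       # (a) construct the novel classifier
    p = softmax(X @ W_novel.T)
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    p[np.arange(len(y)), y] -= 1.0             # dL/dlogits for softmax cross-entropy
    dW_novel = p.T @ X / len(y)
    return loss, dW_novel @ W_base.T           # chain rule through W_novel = C @ W_base

X = rng.normal(size=(32, D))                   # toy few-shot batch
y = rng.integers(0, N_NOVEL, size=32)
for step in range(200):                        # (b) learn the weight coefficients
    loss, dC = loss_and_grad(X, y, C)
    C -= 0.5 * dC
print(f"final loss: {loss:.3f}")               # typically decreases on the toy batch
```

The design point the claims turn on is that only the small coefficient matrix is trained while the base weights stay fixed, which is why the examiner characterizes the steps as generic mathematical operations.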
Claim 3 recites, in part, “wherein in (a) above, a preset number of Gaussian random vectors is used as additional basis vectors to construct the novel classifier capable of expressing any novel class,” which includes a wherein clause adding “Gaussian random vectors” to the element it depends on; “Gaussian random vectors” is a well-known generic term in the field of neural networks, recited at a high level of generality to perform the generic function of constructing “the novel classifier capable of expressing any novel class.” Therefore, the claim elements are not an indication of an integration of the abstract idea into a practical application, nor are they considered significantly more.

Claim 4 recites, in part, “wherein in (b) above, a parameter of the novel classifier model is parameterized with the weight coefficient,” which includes a wherein clause adding mathematical optimization steps to the element it depends on without integration into a practical application; “and then the weight coefficient is updated to streamline the training of the novel classifier” further recites a routine step of updating parameters during training and thus does not limit how the model works to arrive at such an outcome.

Claim 5 recites, in part, “further comprising (c) performing object recognition for a novel class by applying it to an object recognition model,” which merely further specifies the scope of the abstract idea by introducing an additional conventional machine learning technique, performing the generic function of object recognition through the additional element of “an object recognition model.” The claim further recites additional limitations that merely further specify the elements on which they depend; here, the model comprises generic machine learning components recited at a high level of generality, such as a feature map, a feature pyramid network, and an output of object recognition.

Claims 6-8 recite, in part, wherein clauses that merely further specify the elements on which they depend, and therefore are not indications of an integration of the abstract idea into a practical application, nor considered significantly more. Accordingly, dependent claims 2-8 are not patent eligible under 35 U.S.C. 101.

Regarding independent claim 9 and its dependent claims 10-16: independent claim 9 recites limitations analogous to those of independent claim 1; hence, these analogous limitations are not eligible under 35 U.S.C. 101 for the reasons given in the claim 1 analysis. Furthermore, claim 9 recites additional features such as “an input interface device configured to… a memory configured to store a program… and a processor configured to execute a program…,” which are features of generic computers and computer components recited at a high level of generality to perform generic, well-known functions, such as a processor executing instructions stored in a memory. The dependent claims 10-16 each recite limitations analogous to those of dependent claims 2-8; hence, these analogous limitations are not eligible under 35 U.S.C. 101 for the reasons given above.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 4, 9-10, and 12 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by SHI et al. (US 20210012226 A1), hereinafter referenced as SHI.

Regarding claim 1, SHI teaches a method of classifying novel class objects (Fig. 7-8, Paragraphs [0049-0050] - SHI discloses FIG. 7 shows an example computer-implemented method of adapting a base classifier to one or more novel classes in accordance with the present invention. SHI further discloses FIG. 8 shows an example computer-implemented method of learning a set of transformation parameters in accordance with the present invention), comprising: (a) constructing a novel classifier (Fig. 8, step 830 called constructing base classifier, Paragraph [0127] - SHI discloses method 800 may comprise, in an operation titled “CONSTRUCTING BASE CLASSIFIER”, constructing 830 a base classifier configured to classify instances into the one or more base classes based on the class representations of the one or more base classes and the parameters of the feature extractor) considering prior knowledge acquired from a base classifier (Fig. 8, step 850 called adapting base classifier, Paragraph [0127] - SHI discloses the method 800 may also comprise, in an operation titled “SELECTING NOVEL CLASS TRAINING DATA”, selecting 840 training data for the one or more novel classes from the training data. The method 800 may further comprise, in an operation titled “ADAPTING BASE CLASSIFIER”, adapting 850 the base classifier to the one or more novel classes.); and (b) learning a parameterized weight coefficient (Fig. 2, Paragraph [0021] - SHI discloses the set of parameters, e.g., weights of the CNN or other model parameters, may be trained in order to obtain a feature representation as a compressed representation of the input instance that captures the information about the instance that is most relevant to the classification task at hand) of a novel classifier model (Fig. 8, Paragraph [0125] - SHI discloses FIG. 8 shows a block-diagram of computer-implemented method 800 of learning a set of transformation parameters. The set of transformation parameters may be for adapting a base classifier to one or more novel classes) during the training of the novel classifier (Fig. 8, Paragraph [0127] – SHI further discloses the method 800 may also comprise, in an operation titled “SELECTING NOVEL CLASS TRAINING DATA”, selecting 840 training data for the one or more novel classes from the training data.).

Regarding claim 2, SHI teaches the method according to claim 1. SHI further teaches wherein in (a) above, a parameter of the base classifier previously learned for a set of base classes is considered as the prior knowledge (Fig. 6, Paragraph [0048] – SHI discloses FIG. 6 shows a detailed example of how a set of transformation parameters for adapting a base classifier to one or more novel classes may be learned.).

Regarding claim 4, SHI teaches the method according to claim 1. SHI further teaches wherein in (b) above, a parameter of the novel classifier model is parameterized with the weight coefficient (Fig. 4, Paragraph [0021] – SHI discloses the feature extractor may be parametrized by a set of parameters. The set of parameters, e.g., weights of the CNN or other model parameters, may be trained in order to obtain a feature representation as a compressed representation of the input instance that captures the information about the instance that is most relevant to the classification task at hand.), and then the weight coefficient is updated to streamline the training of the novel classifier (Fig. 4, Paragraph [0089] – SHI discloses performing bidirectional updates may allow information to be propagated among the different classes so that a better joint classifier is obtained.).

Regarding claim 9, SHI teaches a system for classifying novel class objects (Fig. 1, Paragraph [0053] - SHI discloses FIG. 1 shows a system 100 for adapting a base classifier to one or more novel classes), comprising: an input interface device (Fig. 1, #120 called data interface, Paragraph [0053]) configured to receive prior knowledge from a base classifier (Fig. 1, Paragraph [0053] - SHI discloses the system 100 may comprise a data interface 120 and a processor subsystem 140 which may internally communicate via data communication 124. Data interface 120 may be for accessing data 050 representing the base classifier.); a memory (Fig. 9, #900 called computer readable medium, Paragraph [0129]) configured to store a program (Fig. 9, Paragraph [0129] - SHI discloses the executable code may be stored in a transitory or non-transitory manner. Examples of computer readable mediums include memory devices, optical storage devices, integrated circuits, servers, online software, etc.) that constructs and trains a novel classifier by considering the prior knowledge of the base classifier (Fig. 9, Paragraph [0129] - SHI discloses the computer readable medium 900 may comprise transitory or non-transitory data 910 representing a set of transformation parameters for adapting a base classifier to one or more novel classes); and a processor configured to execute the program (Fig. 2, #240 called processor subsystem, Paragraph [0064]), wherein the program involves learning a parameterized weight coefficient of a novel classifier model during the training of the novel classifier (Fig. 2, Paragraph [0066] - SHI discloses processor subsystem 240 may be configured to, during operation of the system, learn the set of transformation parameters. To learn the set of transformation parameters, during the operation, processor subsystem 240 may perform a repeated process. SHI further discloses the repeated process may further comprise selecting training data for the one or more novel classes from the training data. The repeated process may also comprise adapting the base classifier to the one or more novel classes according to a method described herein using the set of transformation parameters.).

Regarding claim 10, SHI teaches the system according to claim 9. SHI further teaches wherein the prior knowledge is a parameter of the base classifier previously learned for a set of base classes (Fig. 6, Paragraph [0048] – SHI discloses FIG. 6 shows a detailed example of how a set of transformation parameters for adapting a base classifier to one or more novel classes may be learned.).

Regarding claim 12, SHI teaches the system according to claim 9. SHI further teaches wherein the processor (Fig. 2, #240 called processor subsystem, Paragraphs [0064, 0066]) may involve the process of parameterizing a parameter of the novel classifier model with the weight coefficient (Fig. 4, Paragraph [0021] – SHI discloses the feature extractor may be parametrized by a set of parameters. The set of parameters, e.g., weights of the CNN or other model parameters, may be trained in order to obtain a feature representation as a compressed representation of the input instance that captures the information about the instance that is most relevant to the classification task at hand.).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over SHI (US 20210012226 A1), hereinafter referenced as SHI, in view of WANG (US 20220343638 A1), hereinafter referenced as WANG.

Regarding claim 3, SHI teaches the method according to claim 1. Although SHI teaches a preset number of vectors used as additional basis vectors to construct the novel classifier capable of expressing any novel class (Fig. 3, Paragraph [0077] – SHI discloses Classifier C may comprise class representations CR1, 351, up to CRn, 352 of one or more classes into which instances can be classified. For example, if classifier C is a base classifier, then the class representations may be of base classes, whereas if classifier C is a joint classifier, then the class representations may be of base classes and novel classes. Class representations CRi are typically vectors.), SHI fails to explicitly teach Gaussian random vectors. However, WANG explicitly teaches Gaussian random vectors (Fig. 4, Paragraph [0133] – WANG discloses the composite images provided with classification labels are generated based on the preset classification labels, the one-dimensional Gaussian random vectors and the preset generator model, and the composite image label pairs are finally generated. WANG further discloses the training is stopped when the first loss function, the second loss function and the third loss function are all converged to obtain the ternary generative adversarial network, that is, to obtain the trained classification model. Here, the trained classification model includes the trained generator model, the trained discriminator model and the trained classifier model.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify SHI's method of classifying novel class objects, comprising (a) constructing a novel classifier considering prior knowledge acquired from a base classifier and (b) learning a parameterized weight coefficient of a novel classifier model, with WANG's Gaussian random vectors, such that in (a) above a preset number of Gaussian random vectors is used as additional basis vectors to construct the novel classifier capable of expressing any novel class. The motivation behind the modification would have been to obtain a method for classifying novel objects that can combine fast learning on novel classes with slower learning on classes of the training dataset, where the set of transformation parameters may be determined with an aim of adapting fast and well across varying tasks involving previously unseen classes, since both SHI and WANG relate to classification methods and systems: SHI has systems and methods for adapting a base classifier to one or more novel classes, and WANG has a smart diagnosis assistance method based on medical images, which may be applied to classify medical images. Please see SHI (US 20210012226 A1), Paragraphs [0037, 0105] and WANG (US 20220343638 A1), Paragraph [0092].

Regarding claim 11, SHI teaches the system according to claim 9. Although SHI teaches wherein the processor uses a preset number of vectors as additional basis vectors to construct the novel classifier (Fig. 3, Paragraph [0077] – SHI discloses Classifier C may comprise class representations CR1, 351, up to CRn, 352 of one or more classes into which instances can be classified. For example, if classifier C is a base classifier, then the class representations may be of base classes, whereas if classifier C is a joint classifier, then the class representations may be of base classes and novel classes. Class representations CRi are typically vectors.), SHI fails to explicitly teach Gaussian random vectors. However, WANG explicitly teaches Gaussian random vectors (Fig. 4, Paragraph [0133] – WANG discloses the composite images provided with classification labels are generated based on the preset classification labels, the one-dimensional Gaussian random vectors and the preset generator model, and the composite image label pairs are finally generated. WANG further discloses the training is stopped when the first loss function, the second loss function and the third loss function are all converged to obtain the ternary generative adversarial network, that is, to obtain the trained classification model. Here, the trained classification model includes the trained generator model, the trained discriminator model and the trained classifier model.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify SHI's system for classifying novel class objects with WANG's Gaussian random vectors, such that the processor uses a preset number of Gaussian random vectors as additional basis vectors to construct the novel classifier. The motivation behind the modification would have been to obtain a system for classifying novel objects that can combine fast learning on novel classes with slower learning on classes of the training dataset, where the set of transformation parameters may be determined with an aim of adapting fast and well across varying tasks involving previously unseen classes, since both SHI and WANG relate to classification methods and systems: SHI has systems and methods for adapting a base classifier to one or more novel classes, and WANG has a smart diagnosis assistance method based on medical images, which may be applied to classify medical images. Please see SHI (US 20210012226 A1), Paragraphs [0037, 0105] and WANG (US 20220343638 A1), Paragraph [0092].
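For context on the claim 3 / claim 11 dispute, the recited construction can be pictured as follows: the base classifier's weight vectors are augmented with a preset number of Gaussian random vectors, so the combined basis can span directions the base weights alone cannot reach. A minimal sketch; the shapes and names are assumptions, not drawn from the application or the cited references.

```python
import numpy as np

rng = np.random.default_rng(1)
D, N_BASE, N_EXTRA, N_NOVEL = 64, 10, 8, 3

W_base = rng.normal(size=(N_BASE, D))      # prior knowledge (frozen base weights)
G = rng.normal(size=(N_EXTRA, D))          # preset number of Gaussian random vectors
basis = np.vstack([W_base, G])             # augmented basis: (N_BASE + N_EXTRA, D)

# Each novel-class weight vector is a learned combination over the full basis;
# the extra random directions let it leave the span of the base weights alone.
C = 0.1 * rng.normal(size=(N_NOVEL, N_BASE + N_EXTRA))
W_novel = C @ basis
print(W_novel.shape)                       # (3, 64)
print(np.linalg.matrix_rank(basis))        # almost surely 18: a strictly richer span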
Claims 5-8 and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over SHI (US 20210012226 A1), hereinafter referenced as SHI, in view of BUTLER (US 20220414869 A1), hereinafter referenced as BUTLER.

Regarding claim 5, SHI teaches the method according to claim 1. Although SHI further teaches (c) performing object recognition for a novel class by applying it to an object recognition model (Fig. 1, Paragraph [0063] – SHI discloses system 100 may be configured to determine a classification of a query image obtained from camera 180 using the joint classifier [wherein the joint classifier is an object recognition model] to detect an object of interest in an environment of the vehicle, for example, a traffic sign.), SHI fails to explicitly teach wherein in (c) above, a feature map (hereinafter referred to as "first feature map") is obtained through a backbone network by processing an input image, a feature map in a feature pyramid network (hereinafter referred to as "FPN feature map") is obtained using the first feature map, and a result of object recognition in the input image is output.

However, BUTLER explicitly teaches wherein in (c) above, a feature map (hereinafter referred to as "first feature map") (Fig. 5A, #518 called feature map, Paragraph [0057]) is obtained through a backbone network by processing an input image (Fig. 1, Paragraph [0033] – BUTLER discloses in the architecture 100, the Region Proposal Network (RPN) may start with the input image being fed into the backbone convolutional neural network. The input image may be first resized such that its shortest side is 600 px with the longer side not exceeding 1000 px. The backbone network may then convert the input image into vectors to feed the next level of layers. For every point in the output feature map, the network may learn whether an object is present in the input image at its corresponding location and estimate its size.), a feature map in a feature pyramid network (hereinafter referred to as "FPN feature map") (Fig. 5A, #518′A—N called feature maps, Paragraph [0057]) is obtained (Fig. 5A, Paragraph [0037] – BUTLER discloses feature pyramid network (FPN) may be used to generate multiple feature map layers with better quality information) using the first feature map (Fig. 5A, #518 called feature map, Paragraph [0057] – BUTLER discloses the output of the proposal generator 506 may include a set of feature maps 518′A—N (hereinafter generally referred to as feature maps 518′). Each feature map 518′ (sometimes herein referred to as proposals or proposed regions) may be an output generated using the feature map 518 and a corresponding initial anchor box 520.), and a result of object recognition in the input image is output (Fig. 5A, Paragraph [0063] – BUTLER discloses the object classifier 508 may determine the object type 524 for the object corresponding to the ROI 516 based on the pixels included in the feature map 518′. The model applier 325 may obtain, retrieve, or otherwise identify the set of object types 524 as the output 522 from the object detection system 305.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify SHI's method of classifying novel class objects with BUTLER's teaching wherein in (c) above, a first feature map is obtained through a backbone network by processing an input image, an FPN feature map is obtained using the first feature map, and a result of object recognition in the input image is output. The motivation behind the modification would have been to obtain a method for classifying novel objects that can combine fast learning on novel classes with slower learning on classes of the training dataset, where the set of transformation parameters may be determined with an aim of adapting fast and well across varying tasks involving previously unseen classes, since both SHI and BUTLER relate to classification methods and systems: SHI has systems and methods for adapting a base classifier to one or more novel classes, and BUTLER has systems, methods, and non-transitory computer-readable medium for training models to detect regions of interest (ROIs) in biomedical images. Please see SHI (US 20210012226 A1), Paragraphs [0037, 0105] and BUTLER (US 20220414869 A1), Paragraph [0032].
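The claim 5 pipeline that the examiner maps to BUTLER (a backbone produces a first feature map, an FPN feature map is derived from it, and a recognition result is output) can be sketched as below. This is a minimal PyTorch illustration with made-up layer sizes, not the network of the application or of either reference.

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(             # stand-in backbone network
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fpn_lateral = nn.Conv2d(64, 64, 1)    # derive an FPN map from the first map
        self.cls_head = nn.Conv2d(64, num_classes, 3, padding=1)

    def forward(self, image):
        first_feature_map = self.backbone(image)             # (c): first feature map
        fpn_feature_map = self.fpn_lateral(first_feature_map)  # (c): FPN feature map
        return self.cls_head(fpn_feature_map)                # per-location class scores

scores = TinyDetector()(torch.randn(1, 3, 128, 128))
print(scores.shape)  # torch.Size([1, 3, 32, 32])
```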
Regarding claim 6, SHI in view of BUTLER teaches the method according to claim 5. SHI fails to explicitly teach wherein in (c) above, the FPN feature map is obtained by attaching a convolutional layer to the first feature map or merging a result of attaching the convolutional layer to the first feature map with an upsampled FPN feature map for calculation. However, BUTLER explicitly teaches wherein in (c) above, the FPN feature map is obtained by attaching a convolutional layer to the first feature map (Fig. 5C, Paragraph [0079] – BUTLER discloses each convolution block 530 of the feature extractor 502 may include a set of transform layers 538A-N (hereinafter generally referred to as the set of transform layers 538). The set of transform layers 538 can include one or more kernels (sometimes herein referred to as weights or parameters) to process the input to produce or generate the feature map 518.) or merging a result of attaching the convolutional layer to the first feature map with an upsampled FPN feature map for calculation (Fig. 2, Paragraph [0037] – BUTLER discloses the FPN may be composed of convolutional networks that handle feature extraction and image reconstruction. As the feature extraction progresses, the spatial resolution of the layer may decrease. With more high-level structures detected, the semantic value for each layer may increase.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify SHI in view of BUTLER's method of classifying novel class objects with BUTLER's teaching wherein in (c) above, the FPN feature map is obtained by attaching a convolutional layer to the first feature map or merging a result of attaching the convolutional layer to the first feature map with an upsampled FPN feature map for calculation. The motivation behind the modification would have been to obtain a method for classifying novel objects that can combine fast learning on novel classes with slower learning on classes of the training dataset, where the set of transformation parameters may be determined with an aim of adapting fast and well across varying tasks involving previously unseen classes, since both SHI and BUTLER relate to classification methods and systems: SHI has systems and methods for adapting a base classifier to one or more novel classes, and BUTLER has systems, methods, and non-transitory computer-readable medium for training models to detect regions of interest (ROIs) in biomedical images. Please see SHI (US 20210012226 A1), Paragraphs [0037, 0105] and BUTLER (US 20220414869 A1), Paragraph [0032].
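The two alternatives recited in claim 6 (attach a convolution to the first feature map, or merge that result with an upsampled higher-level FPN map) follow the standard FPN lateral-connection pattern, sketched below with assumed channel counts for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

lateral = nn.Conv2d(64, 256, kernel_size=1)      # convolutional layer attached to the first map

first_feature_map = torch.randn(1, 64, 32, 32)   # from the backbone
upper_fpn_map = torch.randn(1, 256, 16, 16)      # higher (coarser) FPN level

top_down = F.interpolate(upper_fpn_map, scale_factor=2, mode="nearest")  # upsampled FPN map
fpn_feature_map = lateral(first_feature_map) + top_down                  # element-wise merge
print(fpn_feature_map.shape)  # torch.Size([1, 256, 32, 32])
```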
Regarding claim 7, SHI in view of BUTLER teaches the method according to claim 5. SHI fails to explicitly teach wherein in (c) above, a classification head, a centerness head, a regression head, and a controller head associated with the FPN feature map are used, where the controller head is used to set a parameter of a mask head for object recognition, and the result of object recognition is output through an instance-wise mask head. However, BUTLER explicitly teaches wherein in (c) above, a classification head (Fig. 5A, #508 called object classifier, Paragraph [0058]), a centerness head (Fig. 6, #330 called output evaluator, Paragraph [0089] – BUTLER discloses the output evaluator 330 may identify a centroid (e.g., using x-y coordinates relative to the biomedical image 605) of each bounding box 625 produced by the object detection model 335 or each predicted mask 635 generated by the instance segmentation model 340), a regression head (Fig. 5F, Paragraph [0083] – BUTLER discloses the transform layer 544 of the box selector 510 may include a regression layer. The regression layer of the box selector 510 may include a linear regression function, a logistic regression function, and a least squares regression function, among others.), and a controller head associated with the FPN feature map are used (Fig. 4, #320 called model trainer, Paragraph [0076] – BUTLER discloses the model trainer 320 may update one or more kernels in the object detection model 335), where the controller head is used to set a parameter of a mask head for object recognition (Fig. 4, #320 called model trainer, Paragraph [0076] – BUTLER discloses the model trainer 320 may update one or more kernels in the object detection model 335 (including the feature extractor 502, the region proposer 504, proposal generator 506, the object classifier 508, and the box selector 510) and instance segmentation model 340 (including the mask head 512)), and the result of object recognition is output through an instance-wise mask head (Fig. 5A, Paragraph [0055] – BUTLER discloses the instance segmentation model 340 may include at least one mask head 512. Paragraph [0068] – BUTLER discloses based on the confidence score, the mask head 512 may select the predicted mask 526 for the output 522.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify SHI in view of BUTLER's method of classifying novel class objects with BUTLER's teaching wherein in (c) above, a classification head, a centerness head, a regression head, and a controller head associated with the FPN feature map are used, where the controller head is used to set a parameter of a mask head for object recognition, and the result of object recognition is output through an instance-wise mask head. The motivation behind the modification would have been to obtain a method for classifying novel objects that can combine fast learning on novel classes with slower learning on classes of the training dataset, where the set of transformation parameters may be determined with an aim of adapting fast and well across varying tasks involving previously unseen classes, since both SHI and BUTLER relate to classification methods and systems: SHI has systems and methods for adapting a base classifier to one or more novel classes, and BUTLER has systems, methods, and non-transitory computer-readable medium for training models to detect regions of interest (ROIs) in biomedical images. Please see SHI (US 20210012226 A1), Paragraphs [0037, 0105] and BUTLER (US 20220414869 A1), Paragraph [0032].
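Claim 7's head layout, in which a controller head on the FPN feature map emits the parameters of an instance-wise mask head, resembles dynamic-filter designs such as CondInst. The sketch below is a hedged illustration with assumed sizes; the two-layer 1x1-conv mask head is an assumption, not the applicant's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C, NUM_CLASSES = 256, 3
MASK_IN, MASK_MID = 8, 8
# parameter count of a tiny two-layer 1x1-conv mask head, predicted per location
N_PARAMS = MASK_IN * MASK_MID + MASK_MID + MASK_MID * 1 + 1   # = 81

fpn_map = torch.randn(1, C, 32, 32)
cls_head = nn.Conv2d(C, NUM_CLASSES, 3, padding=1)   # classification head
ctr_head = nn.Conv2d(C, 1, 3, padding=1)             # centerness head
reg_head = nn.Conv2d(C, 4, 3, padding=1)             # box regression head (l, t, r, b)
controller = nn.Conv2d(C, N_PARAMS, 3, padding=1)    # controller head: mask-head params

cls, ctr, reg = cls_head(fpn_map), ctr_head(fpn_map), reg_head(fpn_map)
theta = controller(fpn_map)[0, :, 7, 9]              # parameters for one instance location

# Instance-wise mask head: run the predicted parameters over shared mask features.
w1 = theta[: MASK_IN * MASK_MID].view(MASK_MID, MASK_IN, 1, 1)
b1 = theta[MASK_IN * MASK_MID : MASK_IN * MASK_MID + MASK_MID]
w2 = theta[-(MASK_MID + 1) : -1].view(1, MASK_MID, 1, 1)
b2 = theta[-1:]
mask_feats = torch.randn(1, MASK_IN, 64, 64)
mask = F.conv2d(F.relu(F.conv2d(mask_feats, w1, b1)), w2, b2).sigmoid()
print(cls.shape, ctr.shape, reg.shape, mask.shape)
```

The design choice worth noting is that the mask head has no weights of its own: every instance gets a fresh set of filters from the controller head, which is what "instance-wise" conveys in the claim.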
Regarding claim 8, SHI in view of BUTLER teaches the method according to claim 7. Although SHI further teaches the object recognition model (Fig. 1, Paragraph [0063] – SHI discloses system 100 may be configured to determine a classification of a query image obtained from camera 180 using the joint classifier [wherein the joint classifier is an object recognition model] to detect an object of interest in an environment of the vehicle, for example, a traffic sign.), SHI fails to explicitly teach wherein in (c) above, an objective function used for training is constructed through a combination of an objective function to improve object classification performance, an objective function to find object centerness, an objective function for object bounding box regression, and an objective function for mask-based object recognition, and the object recognition model is trained by fine-tuning the classification head, centerness head, regression head, and controller head of the object recognition model.

However, BUTLER explicitly teaches wherein in (c) above, an objective function used for training is constructed (Fig. 3, Paragraph [0076] – BUTLER discloses the model trainer 320 may use the objective function with a set learning rate, a momentum, and a weight decay for a number of iterations in training) through a combination of an objective function to improve object classification performance (Fig. 3, Paragraph [0072] – BUTLER discloses when the difference in location or classification error is higher, the loss metric determined by the model trainer 320 may be higher), an objective function to find object centerness (Fig. 6, #330 called output evaluator, Paragraph [0089] – BUTLER discloses for each pair of bounding boxes 625, the output evaluator 330 may calculate or determine a difference between the respective centroids.), an objective function for object bounding box regression (Fig. 5F, Paragraph [0083] – BUTLER discloses the transform layer 544 of the box selector 510 may include a regression layer. The regression layer of the box selector 510 may include a linear regression function, a logistic regression function, and a least squares regression function, among others.), and an objective function for mask-based object recognition (Fig. 4, Paragraph [0076] – BUTLER discloses the loss metrics used to update may include the loss metric for the object detection model 335 and the loss metric for the instance segmentation model 340. The updating of weights may be in accordance with an objective function for the object detection model 335 and instance segmentation model 340.), and the object recognition model (Fig. 3, #335 called object detection model, Paragraphs [0045-0046]) is trained by fine-tuning the classification head (Fig. 2, Paragraph [0035] – BUTLER discloses the classification layer may output whether the object is found or not for the anchor. In the object detection model, the network layers may be modified, so that the layer can fit into the network properly. By using this network, the data may be fine-tuned to be able to detect implants), centerness head (Fig. 6, Paragraph [0090] – BUTLER discloses across the set of biomedical images 605, the output evaluator 330 may identify a centroid (e.g., x and y coordinates) of the corresponding bounding box 625 produced by the object detection model 335 or the predicted mask 635 generated by the instance segmentation model 340.), regression head (Fig. 2, Paragraph [0035] – BUTLER discloses the regression layer may output the detected object bounding box coordinates. In the object detection model, the network layers may be modified, so that the layer can fit into the network properly. By using this network, the data may be fine-tuned to be able to detect implants), and controller head (Fig. 4, #320 called model trainer, Paragraph [0076]) of the object recognition model (Fig. 4, Paragraph [0069] – BUTLER discloses with the production of the output 522, the model trainer 320 may compare the output 522 with the example 405 of the training dataset 350 to determine at least one loss metric. The model trainer 320 may calculate, generate, or otherwise determine one or more loss metrics (also referred to herein as localization loss) for the object detection model 335.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify SHI in view of BUTLER's method of classifying novel class objects with BUTLER's teaching wherein in (c) above, an objective function used for training is constructed through a combination of an objective function to improve object classification performance, an objective function to find object centerness, an objective function for object bounding box regression, and an objective function for mask-based object recognition, and the object recognition model is trained by fine-tuning the classification head, centerness head, regression head, and controller head of the object recognition model. The motivation behind the modification would have been to obtain a method for classifying novel objects that can combine fast learning on novel classes with slower learning on classes of the training dataset, where the set of transformation parameters may be determined with an aim of adapting fast and well across varying tasks involving previously unseen classes, since both SHI and BUTLER relate to classification methods and systems: SHI has systems and methods for adapting a base classifier to one or more novel classes, and BUTLER has systems, methods, and non-transitory computer-readable medium for training models to detect regions of interest (ROIs) in biomedical images. Please see SHI (US 20210012226 A1), Paragraphs [0037, 0105] and BUTLER (US 20220414869 A1), Paragraph [0032].
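Claim 8's combined training objective can be pictured as a weighted sum of four losses. The particular loss functions and weights below are assumptions chosen for illustration; the claim only recites that the four objectives are combined and the heads fine-tuned.

```python
import torch
import torch.nn.functional as F

def combined_objective(cls_logits, cls_targets,
                       ctr_logits, ctr_targets,
                       box_preds, box_targets,
                       mask_logits, mask_targets,
                       w=(1.0, 1.0, 1.0, 1.0)):
    l_cls = F.cross_entropy(cls_logits, cls_targets)                        # classification
    l_ctr = F.binary_cross_entropy_with_logits(ctr_logits, ctr_targets)     # centerness
    l_box = F.l1_loss(box_preds, box_targets)                               # box regression
    l_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_targets)  # mask recognition
    return w[0] * l_cls + w[1] * l_ctr + w[2] * l_box + w[3] * l_mask

# Toy head outputs (requires_grad stands in for heads being fine-tuned).
cls_logits = torch.randn(16, 3, requires_grad=True)
ctr_logits = torch.randn(16, requires_grad=True)
box_preds = torch.randn(16, 4, requires_grad=True)
mask_logits = torch.randn(16, 1, 64, 64, requires_grad=True)

loss = combined_objective(cls_logits, torch.randint(0, 3, (16,)),
                          ctr_logits, torch.rand(16),
                          box_preds, torch.rand(16, 4),
                          mask_logits, torch.randint(0, 2, (16, 1, 64, 64)).float())
loss.backward()        # gradients flow back to whichever heads produced these outputs
print(float(loss))
```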
The motivation behind the modification would have been to obtain a method for classifying novel objects that can combine fast learning on novel classes with slower learning on classes of the training dataset, where the set of transformation parameters may be determined with an aim of adapting fast and well across varying tasks involving previously unseen classes, since both SHI and BUTLER relate to classification methods and systems, wherein SHI has systems and methods for adapting a base classifier to one or more novel classes, and BUTLER has systems, methods, and non-transitory computer-readable medium for training models to detect regions of interests (ROIs) in biomedical images. Please see SHI (US 20210012226 A1), Paragraphs [0037, 0105] and BUTLER (US 20220414869 A1), Paragraph [0032]. Regarding claim 13, SHI teaches the system according to claim 9. Although SHI further teaches wherein the processor (Fig. Fig. 2, #240 called processor subsystem, Paragraph [0064, 0066]) applies a method for classifying novel class objects to an object recognition model (Fig. 1, Paragraph [0063] – SHI discloses system 100 may be configured to determine a classification of a query image obtained from camera 180 using the joint classifier [wherein the joint classifier is an object recognition model] to detect an object of interest in an environment of the vehicle, for example, a traffic sign.), SHI fails to explicitly teach to obtain a first feature map through a backbone network by processing an input image, to obtain a feature pyramid network (FPN) feature map using the first feature map, and to output a result of object recognition in the input image. However, BUTLER explicitly teaches to obtain a first feature map (Fig. 5A, #518 called feature map, Paragraph [0057]) through a backbone network by processing an input image (Fig. 1, Paragraph [0033] – BUTLER discloses in the architecture 100, the Region Proposal Network (RPN) may start with the input image being fed into the backbone convolutional neural network. The input image may be first resized such that its shortest side is 600 px with the longer side not exceeding 1000 px. The backbone network may then the input image into vectors and to feed next level of layers. For every point in the output feature map, the network may learn whether an object is present in the input image at its corresponding location and estimate its size.), to obtain a feature pyramid network (FPN) feature map (Fig. 5A, #518’A—N called feature maps, Paragraph [0057]) using the first feature map (Fig. 5A, #518 called feature map, Paragraph [0057] – BUTLER discloses the output of the proposal generator 506 may include a set of feature maps 518′A—N (hereinafter generally referred to as feature maps 518′). Each feature map 518′ (sometimes herein referred to as proposals or proposed regions) may be an output generated using the feature map 518 and a corresponding initial anchor box 520), and to output a result of object recognition in the input image (Fig. 5A, Paragraph [0063] – BUTLER discloses the object classifier 508 may determine the object type 524 for the object corresponding to the ROI 516 based on the pixels included in the feature map 518′. The model applier 325 may obtain, retrieve, or otherwise identify the set of object types 524 as the output 522 from the object detection system 305.). 
Regarding claim 14, SHI in view of BUTLER teach the system according to claim 13. Although SHI teaches the processor (Fig. 2, #240 called processor subsystem, Paragraphs [0064, 0066]), SHI fails to explicitly teach wherein the processor attaches a convolutional layer to the first feature map, merging a result of attaching the convolutional layer to the first feature map with an upsampled FPN feature map for calculation. However, BUTLER explicitly teaches wherein the processor (Fig. 3, Paragraph [0044] – BUTLER discloses each of the components of the system 300 (e.g., the object detection system 305, its components, and the device 310) may be implemented using hardware (e.g., one or more processors coupled with memory) or a combination of hardware and software) attaches a convolutional layer to the first feature map (Fig. 5C, Paragraph [0079] – BUTLER discloses each convolution block 530 of the feature extractor 502 may include a set of transform layers 538A-N (hereinafter generally referred to as the set of transform layers 538); the set of transform layers 538 can include one or more kernels (sometimes referred to herein as weights or parameters) to process the input to produce or generate the feature map 518), merging a result of attaching the convolutional layer to the first feature map with an upsampled FPN feature map for calculation (Fig. 2, Paragraph [0037] – BUTLER discloses the FPN may be composed of convolutional networks that handle feature extraction and image reconstruction; as the feature extraction progresses, the spatial resolution of the layer may decrease, and with more high-level structures detected, the semantic value for each layer may increase). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify SHI's system for classifying novel class objects with BUTLER's teaching of a processor that attaches a convolutional layer to the first feature map and merges the result of attaching the convolutional layer with an upsampled FPN feature map for calculation. The motivation behind the modification is the same as set forth for claim 13 above. Please see SHI (US 20210012226 A1), Paragraphs [0037, 0105] and BUTLER (US 20220414869 A1), Paragraph [0032].
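The limitation of claim 14 describes what is commonly called a lateral connection in a feature pyramid network: a convolution applied to a backbone feature map, whose output is merged with an upsampled, coarser-resolution FPN map. A minimal sketch under that assumption follows; the channel counts, spatial sizes, and the element-wise addition used for the "merging" are illustrative choices, not details taken from the references.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Lateral connection: a 1x1 conv is attached to the first (backbone)
# feature map, and its output is merged with an upsampled coarser
# FPN feature map by element-wise addition.
lateral_conv = nn.Conv2d(in_channels=512, out_channels=256, kernel_size=1)

first_map = torch.randn(1, 512, 100, 100)   # backbone ("first") feature map
coarser_fpn = torch.randn(1, 256, 50, 50)   # coarser-level FPN feature map

lateral = lateral_conv(first_map)           # conv layer attached to first map
upsampled = F.interpolate(coarser_fpn, scale_factor=2.0, mode="nearest")
merged = lateral + upsampled                # merged map used for calculation
print(tuple(merged.shape))                  # (1, 256, 100, 100)
```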
Regarding claim 15, SHI in view of BUTLER teach the system according to claim 13. Although SHI further teaches the processor (Fig. 2, #240 called processor subsystem, Paragraphs [0064, 0066]), SHI fails to explicitly teach wherein the processor outputs the result of object recognition using a classification head, a centerness head, a regression head, and a controller head associated with the FPN feature map. However, BUTLER explicitly teaches wherein the processor (Fig. 3, Paragraph [0044] – BUTLER discloses each of the components of the system 300 (e.g., the object detection system 305, its components, and the device 310) may be implemented using hardware (e.g., one or more processors coupled with memory) or a combination of hardware and software) outputs the result of object recognition (Fig. 5A, Paragraph [0063] – BUTLER discloses the object classifier 508 may determine the object type 524 for the object corresponding to the ROI 516 based on the pixels included in the feature map 518′; the model applier 325 may obtain, retrieve, or otherwise identify the set of object types 524 as the output 522 from the object detection system 305) using a classification head (Fig. 5A, #508 called object classifier, Paragraph [0058]), a centerness head (Fig. 6, #330 called output evaluator, Paragraph [0089] – BUTLER discloses the output evaluator 330 may identify a centroid (e.g., using x-y coordinates relative to the biomedical image 605) of each bounding box 625 produced by the object detection model 335 or each predicted mask 635 generated by the instance segmentation model 340), a regression head (Fig. 5F, Paragraph [0083] – BUTLER discloses the transform layer 544 of the box selector 510 may include a regression layer, which may include a linear regression function, a logistic regression function, or a least squares regression function, among others), and a controller head associated with the FPN feature map (Fig. 4, #320 called model trainer, Paragraph [0076] – BUTLER discloses the model trainer 320 may update one or more kernels in the object detection model 335). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify SHI's system for classifying novel class objects with BUTLER's teaching of a processor that outputs the result of object recognition using a classification head, a centerness head, a regression head, and a controller head associated with the FPN feature map. The motivation behind the modification is the same as set forth for claim 13 above. Please see SHI (US 20210012226 A1), Paragraphs [0037, 0105] and BUTLER (US 20220414869 A1), Paragraph [0032].
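Claim 15's four heads mirror the head arrangement of anchor-free detectors in the FCOS/CondInst family, in which per-location classification, centerness, and box-regression predictions are produced from each FPN level, and a controller head emits parameters for a downstream mask branch. The sketch below illustrates that arrangement only; the channel counts (256 input channels, 80 classes, 169 controller outputs) and the use of a single FPN level are assumptions, not details from SHI or BUTLER.

```python
import torch
import torch.nn as nn

# Illustrative per-level prediction heads over one FPN feature map.
num_classes = 80     # assumed class count
ctrl_params = 169    # assumed size of the dynamic mask-head parameter vector

cls_head = nn.Conv2d(256, num_classes, 3, padding=1)   # classification scores
ctr_head = nn.Conv2d(256, 1, 3, padding=1)             # centerness score
reg_head = nn.Conv2d(256, 4, 3, padding=1)             # box offsets (l, t, r, b)
ctrl_head = nn.Conv2d(256, ctrl_params, 3, padding=1)  # controller outputs

fpn_map = torch.randn(1, 256, 100, 100)                # one FPN feature map
result = {
    "classification": cls_head(fpn_map),
    "centerness": ctr_head(fpn_map),
    "regression": reg_head(fpn_map),
    "controller": ctrl_head(fpn_map),
}
```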
Regarding claim 16, SHI in view of BUTLER teach the system according to claim 15. Although SHI further teaches the processor (Fig. 2, #240 called processor subsystem, Paragraphs [0064, 0066]) and the object recognition model (Fig. 1, Paragraph [0063] – SHI discloses system 100 may be configured to determine a classification of a query image obtained from camera 180 using the joint classifier [wherein the joint classifier is an object recognition model] to detect an object of interest in an environment of the vehicle, for example, a traffic sign), SHI fails to explicitly teach wherein the processor constructs an objective function used for training through a combination of an objective function to improve object classification performance, an objective function to find object centerness, an objective function for object bounding box regression, and an objective function for mask-based object recognition, and performs object recognition model training by fine-tuning the classification head, centerness head, regression head, and controller head of the object recognition model. However, BUTLER explicitly teaches wherein the processor (Fig. 3, Paragraph [0044] – BUTLER discloses each of the components of the system 300 (e.g., the object detection system 305, its components, and the device 310) may be implemented using hardware (e.g., one or more processors coupled with memory) or a combination of hardware and software) constructs an objective function used for training (Fig. 3, Paragraph [0076] – BUTLER discloses the model trainer 320 may use the objective function with a set learning rate, a momentum, and a weight decay for a number of iterations in training) through a combination of an objective function to improve object classification performance (Fig. 3, Paragraph [0072] – BUTLER discloses that, when the difference in location or classification error is higher, the loss metric determined by the model trainer 320 may be higher), an objective function to find object centerness (Fig. 6, #330 called output evaluator, Paragraph [0089] – BUTLER discloses that, for each pair of bounding boxes 625, the output evaluator 330 may calculate or determine a difference between the respective centroids), an objective function for object bounding box regression (Fig. 5F, Paragraph [0083] – BUTLER discloses the transform layer 544 of the box selector 510 may include a regression layer, which may include a linear regression function, a logistic regression function, or a least squares regression function, among others), and an objective function for mask-based object recognition (Fig. 4, Paragraph [0076] – BUTLER discloses the loss metrics used to update may include the loss metric for the object detection model 335 and the loss metric for the instance segmentation model 340, and the updating of weights may be in accordance with an objective function for the object detection model 335 and the instance segmentation model 340), and performs object recognition model training (Fig. 3, #335 called object detection model, Paragraphs [0045-0046]) by fine-tuning the classification head (Fig. 2, Paragraph [0035] – BUTLER discloses the classification layer may output whether the object is found or not for the anchor), centerness head (Fig. 6, Paragraph [0090] – BUTLER discloses that, across the set of biomedical images 605, the output evaluator 330 may identify a centroid (e.g., x and y coordinates) of the corresponding bounding box 625 produced by the object detection model 335 or the predicted mask 635 generated by the instance segmentation model 340), regression head (Fig. 2, Paragraph [0035] – BUTLER discloses the regression layer may output the detected object bounding box coordinates), and controller head (Fig. 4, #320 called model trainer, Paragraph [0076]) of the object recognition model (Fig. 4, Paragraph [0069] – BUTLER discloses the model trainer 320 may compare the output 522 with the example 405 of the training dataset 350 to determine at least one loss metric, and may calculate, generate, or otherwise determine one or more loss metrics (also referred to herein as localization loss) for the object detection model 335). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify SHI's system for classifying novel class objects with BUTLER's teaching of a processor that constructs the training objective function through the recited combination and performs object recognition model training by fine-tuning the classification head, centerness head, regression head, and controller head of the object recognition model. The motivation behind the modification is the same as set forth for claim 13 above. Please see SHI (US 20210012226 A1), Paragraphs [0037, 0105] and BUTLER (US 20220414869 A1), Paragraph [0032].
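As with the corresponding method claim, the fine-tuning limitation of claim 16 amounts to updating only the four heads against the combined objective. The toy step below is a self-contained sketch of that pattern under assumed shapes and placeholder targets; the frozen single-layer "backbone" and the individual loss choices are illustrative stand-ins, not the training procedure of SHI or BUTLER.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Freeze an illustrative backbone; only the four heads receive gradients.
backbone = nn.Conv2d(3, 256, 3, padding=1)
for p in backbone.parameters():
    p.requires_grad = False

heads = nn.ModuleDict({
    "cls": nn.Conv2d(256, 80, 3, padding=1),
    "ctr": nn.Conv2d(256, 1, 3, padding=1),
    "reg": nn.Conv2d(256, 4, 3, padding=1),
    "ctrl": nn.Conv2d(256, 169, 3, padding=1),
})
opt = torch.optim.SGD(heads.parameters(), lr=0.01, momentum=0.9)

image = torch.randn(2, 3, 64, 64)
feat = backbone(image)

# Combined objective: placeholder targets stand in for real annotations.
loss = (
    F.binary_cross_entropy_with_logits(heads["cls"](feat), torch.zeros(2, 80, 64, 64))
    + F.binary_cross_entropy_with_logits(heads["ctr"](feat), torch.zeros(2, 1, 64, 64))
    + F.l1_loss(heads["reg"](feat), torch.zeros(2, 4, 64, 64))
    + heads["ctrl"](feat).pow(2).mean()   # placeholder for the mask term
)

opt.zero_grad()
loss.backward()
opt.step()   # only the head weights are updated
```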
Conclusion

The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, is listed below.

Xu et al. (US 11501438 B2) – Techniques for generating an enhanced cone-beam computed tomography (CBCT) image using a trained model are provided. A CBCT image of a subject is received. A synthetic computed tomography (sCT) image corresponding to the CBCT image is generated using a generative model. The generative model is trained in a generative adversarial network (GAN) and is further trained to process the CBCT image as an input and provide the sCT image as an output. The sCT image is presented for medical analysis of the subject.… Figs. 1, 4, Abstract.

Jain et al. (US 11521396 B1) – Systems and methods are described that probabilistically predict dynamic object behavior. In particular, in contrast to existing systems that attempt to predict object trajectories directly (e.g., directly predict a specific sequence of well-defined states), a probabilistic approach is instead leveraged that predicts discrete probability distributions over object state at each of a plurality of time steps. In one example, systems and methods predict future states of dynamic objects (e.g., pedestrians) such that an autonomous vehicle can plan safer actions/movement.… Figs. 1, 3, Abstract.

Hoshen et al. (WO 2021191908 A1) – A method comprising: receiving, as input, training images, wherein at least a majority of the training images represent normal data instances; receiving, as input, a target image; extracting (i) a set of feature representations from a plurality of image locations within each of the training images, and (ii) target feature representations from a plurality of target image locations within the target image; calculating, with respect to a target image location of the plurality of target image locations in the target image, a distance between (iii) the target feature representation of the target image location, and (iv) a subset from the set of feature representations comprising the k nearest feature representations to the target feature representation; and determining that the target image location is anomalous when the calculated distance exceeds a predetermined threshold.… Fig. 1, Abstract.

Vijayakumar et al. (US 20210295155 A1) – Image analysis is a vital field, since images can provide contextual, environmental, and emotional factors. Conventional methods face challenges in analyzing an image accurately when the image contains less data or has low resolution, and conventional machine learning architectures are computationally intensive when run on high-power computing devices for training and inference. The present disclosure provides a robust deep learning model for inference in any given environmental condition. Initially, image data is generated using a pre-trained Generative Adversarial Network (GAN). The GAN receives a plurality of images of varying domain and generates image data. The image data is annotated and segmented to obtain a contextual label map.… Figs. 2-4, Abstract.
Singh et al. (US 20210124993 A1) – The present disclosure relates to systems, methods, and non-transitory computer readable media for training a classification neural network to classify digital images in few-shot tasks based on self-supervision and manifold mixup. For example, the disclosed systems can train a feature extractor as part of a base neural network utilizing self-supervision and manifold mixup. Indeed, the disclosed systems can apply manifold mixup regularization over a feature manifold learned via self-supervised training such as rotation training or exemplar training. Based on training the feature extractor, the disclosed systems can also train a classifier to classify digital images into novel classes not present within the base classes used to train the feature extractor.… Figs. 2-5, Abstract.

Legarreta Gorrono et al. (US 20220392058 A1) – A computer system that computes second tractography results is described. This computer may include: a computation device (such as a processor, a graphics processing unit or GPU, etc.) that executes program instructions; and memory that stores the program instructions. During operation, the computer system receives information specifying tractography results that specify a set of neurological fibers. Then, the computer system computes, using a predetermined (e.g., pretrained) autoencoder neural network, the second tractography results that specify a second set of neurological fibers based at least in part on the tractography results and information associated with a neurological anatomical region.… Fig. 1, Abstract.

Wang et al. (US 20210406582 A1) – A method of semantically segmenting an input image using a neural network is provided. The method includes extracting features of the input image to generate one or more feature maps, and analyzing the one or more feature maps to generate a plurality of predictions respectively corresponding to a plurality of subpixels of the input image. Extracting features of the input image is performed using a residual network having N residual blocks, N being a positive integer greater than 1. Analyzing the one or more feature maps is performed through M feature analyzing branches to generate M sets of predictions.… Figs. 1, 6, 7, Abstract.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEZAWIT N SHIMELES, whose telephone number is (571) 272-7663. The examiner can normally be reached M-F, 7:30am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BEZAWIT NOLAWI SHIMELES/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Feb 07, 2024
Application Filed
Feb 03, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)


Prosecution Projections

1-2
Expected OA Rounds
100%
Grant Probability
0%
With Interview (-100.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
