Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on February 26, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 – 5, 14 – 18, 23 and 24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mensink et al., US Patent Application Publication No. US-20120269436-A1 (hereinafter Mensink).
Regarding claim 1, Mensink discloses an anomaly labeled-assistant detection system, comprising (Mensink in [0028] and [0031] discloses identifying incorrect or uncertain labels by comparing existing labels with model-based predictions): a computing apparatus, comprising an anomaly labeled detection model, wherein the computing apparatus (Mensink in [0037] discloses, “The system 10 stores a set of one or more structured prediction models 42 (or simply, “models”) that take into account dependencies among image labels”), detects a plurality of pieces of labeled image data with a labeled category through the anomaly labeled detection model (Mensink in [0033] discloses, “The training images 12 are each manually labeled with one or more labels 14 drawn from the set 18 of labels. These manually assigned labels may be the output of a single user or computed from the labels assigned by a set of users”, wherein the set of training images is the plurality of pieces of labeled image data), the anomaly labeled detection model respectively generates an inference category corresponding to each piece of labeled image data (Mensink in [0013] discloses, “Feature-based predictions for values of labels in the set of labels are generated based on features extracted from the image”. Additionally, [0036] discloses, “the classifier 40 outputs a feature function which, for each label, indicates whether that label is true”), and the computing apparatus compares the labeled category and the inference category according to each piece of labeled image data (Mensink in [0041] discloses, “In the fully automated mode, the system 10 operates to predict labels 14 for one or more images 16 without user interaction. In the interactive mode, the trained system 10 operates to predict labels 14 for a set of images 16 where a user is asked to confirm or reject some of the image labels 14 , e.g., via GUI 28 . Predictions for further labels 14 are then conditioned on the user's input”), and automatically lists the labeled image data as anomaly labeled data when the labeled category of the labeled image data is different from the inference category (Mensink in [0041] discloses, “In the fully automated mode, the system 10 operates to predict labels 14 for one or more images 16 without user interaction. In the interactive mode, the trained system 10 operates to predict labels 14 for a set of images 16 where a user is asked to confirm or reject some of the image labels 14 , e.g., via GUI 28 . Predictions for further labels 14 are then conditioned on the user's input”); and a storage apparatus, electrically connected to the computing apparatus, wherein the storage apparatus stores the plurality of pieces of labeled image data (Mensink in [0056] discloses, “At S 112 , at test time, one or more images 16 to be labeled are received, and may be stored in computer memory 22”).
Summary of Citations (Mensink)
Paragraph [0013]; “Feature-based predictions for values of labels in the set of labels are generated based on features extracted from the image”.
Paragraph [0014]; “a method for generating an annotation system includes receiving a training set of manually-labeled training images ... The method further includes acquiring mutual information between pairs of labels in a set of labels based on the training images”.
Paragraph [0028]; “The system can be used for fully-automatic image annotation as well as in an interactive mode, where a user provides the value of some of the image labels. In the interactive embodiment, the structured models can be used to decide which labels should be assigned by the user, and to infer the remaining labels conditioned on the user's responses”.
Paragraph [0031]; “With reference to FIG. 1, a probabilistic image labeling system 10 has been trained with a set 12 of training images 13 to predict labels 14 for new images 16 , which are generally not a part of the training set 12 . A method for generating and then using the generated system 10 to predict labels is illustrated in FIG. 2 and is described below”.
Paragraph [0033]; “The training images 12 are each manually labeled with one or more labels 14 drawn from the set 18 of labels. These manually assigned labels may be the output of a single user or computed from the labels assigned by a set of users”.
Paragraph [0036]; “the classifier 40 outputs a feature function which, for each label, indicates whether that label is true”.
Paragraph [0037]; “The system 10 stores a set of one or more structured prediction models 42 (or simply, “models”) that take into account dependencies among image labels”.
Paragraph [0041]; “In the fully automated mode, the system 10 operates to predict labels 14 for one or more images 16 without user interaction. In the interactive mode, the trained system 10 operates to predict labels 14 for a set of images 16 where a user is asked to confirm or reject some of the image labels 14 , e.g., via GUI 28 . Predictions for further labels 14 are then conditioned on the user's input”.
Paragraph [0056]; “At S 112 , at test time, one or more images 16 to be labeled are received, and may be stored in computer memory 22”.
Regarding claim 2, Mensink discloses the anomaly labeled-assistant detection system according to claim 1, wherein the computing apparatus obtains the plurality of pieces of labeled image data with the labeled category through artificial intelligence software (Mensink in [0002] discloses, “fully automatic systems are used where image labels are automatically predicted without any user interaction”).
Summary of Citations (Mensink)
Paragraph [0002]; “fully automatic systems are used where image labels are automatically predicted without any user interaction”.
Regarding claim 3, Mensink discloses the anomaly labeled-assistant detection system according to claim 2, wherein the computing apparatus identifies and labels a plurality of pieces of image data through the artificial intelligence software to label a to-be-identified region of each piece of image data (Mensink in [0075] discloses, “the classifier 40 includes or accesses a patch extractor, which extracts and analyzes content-related features of patches of the image 13 , 16 , such as shape, texture, color, or the like. The patches can be obtained by image segmentation, by applying specific interest point detectors”. Additionally, [0005] discloses, “the goal was to label the regions in a pre-segmented image with category labels”), and crops the to-be-identified region according to an image category of each piece of image data to generate the plurality of pieces of labeled image data (Mensink in [0032] discloses, “in some embodiments, reduced pixel resolution images, cropped images, or representations of the images derived from the pixel ... may alternatively or additionally be received and processed”. Additionally, Mensink in [0075] discloses, “Each patch vector is then assigned to a nearest cluster and a histogram of the assignments can be generated ... The visual words may each correspond (approximately) to a mid-level image feature such as a type of visual (rather than digital) object (e.g., ball or sphere, rod or shaft, etc.) ... or the like”), and then respectively creates a data set corresponding to each piece of labeled image data according to the labeled category (Mensink in [0051] discloses, “At S 102 , a set 12 of manually-labeled training images 13 is provided, each image having at least one manually-assigned label from a finite set 18 of labels”. Additionally, Mensink in [0036] discloses the labeled category, “classifier system 40 may include a set of binary classifiers, each trained on a respective one of the categories (labels) in the set 18”).
Summary of Citations (Mensink)
Paragraph [0005]; “the goal was to label the regions in a pre-segmented image with category labels”.
Paragraph [0032]; “in some embodiments, reduced pixel resolution images, cropped images, or representations of the images derived from the pixel ... may alternatively or additionally be received and processed”.
Paragraph [0036]; “classifier system 40 may include a set of binary classifiers, each trained on a respective one of the categories (labels) in the set 18”.
Paragraph [0075]; “the classifier 40 includes or accesses a patch extractor, which extracts and analyzes content-related features of patches of the image 13 , 16 , such as shape, texture, color, or the like. The patches can be obtained by image segmentation, by applying specific interest point detectors, by considering a regular grid, or simply by random sampling of image patches ... Each patch vector is then assigned to a nearest cluster and a histogram of the assignments can be generated ... The visual words may each correspond (approximately) to a mid-level image feature such as a type of visual (rather than digital) object (e.g., ball or sphere, rod or shaft, etc.) ... or the like”.
Regarding claim 4, Mensink discloses the anomaly labeled-assistant detection system according to claim 3, wherein the image category is an object identification image, an image segmentation image, or an image classification image (Mensink in [0005] discloses, “the goal was to label the regions in a pre-segmented image with category labels”).
Summary of Citations (Mensink)
Paragraph [0005]; “the goal was to label the regions in a pre-segmented image with category labels”.
Regarding claim 5, Mensink discloses the anomaly labeled-assistant detection system according to claim 3, wherein before the computing apparatus performs single-category anomaly labeled detection through the anomaly labeled detection model, the anomaly labeled detection model is first established (Mensink in [0031] discloses, “With reference to FIG. 1, a probabilistic image labeling system 10 has been trained with a set 12 of training images 13 to predict labels 14 for new images 16 , which are generally not a part of the training set 12”. Additionally, [0003] discloses single-category detection, “Most work on image annotation, object category recognition, and image categorization has focused on methods that deal with one label or object category at a time”), and the method comprises: preparing at least ten pieces of the labeled image data in the data set corresponding to the single category (Mensink in [0051] discloses a set of training images, which implies at least ten pieces, “At S 102 , a set 12 of manually-labeled training images 13 is provided, each image having at least one manually-assigned label from a finite set 18 of labels”. Furthermore, [0003] discloses one label or object category at a time, which corresponds to a single category); performing feature extraction on the labeled image data by using a pre-trained model, to extract a plurality of image features (Mensink in [0052] discloses, “At S 104 , features are extracted from each of the training images based on the visual content of the image and a feature representation is generated based on the extracted features (e.g., in the form of a features vector)”); and performing dimensionality reduction on the image features (Mensink in [0089 – 0090] discloses, “feature function of the image (such as a classifier score for the image for given label i, a Fisher vector element, or the like) ... For efficiency, compact feature functions of the form φi (x)=[si (x),1]T are used, where si (x) is an SVM (support vector machine) score output by classifier system”, wherein the compact feature vector implies dimensionality reduction) and inputting the image features into an initial detection model for training to obtain a trained anomaly labeled detection model (Mensink in [0054 – 0055] discloses, “At S 108 , one or more structured models 42 are generated, based on the labels 14 and on either the image features (S 104 ) or the classifier output (S 106 ) ... At S 110 , the trained classifier system 40 and structured models 42 are stored in computer memory 22 . This completes the training phase”).
Summary of Citations (Mensink)
Paragraph [0003]; “Most work on image annotation, object category recognition, and image categorization has focused on methods that deal with one label or object category at a time”.
Paragraph [0031]; “With reference to FIG. 1, a probabilistic image labeling system 10 has been trained with a set 12 of training images 13 to predict labels 14 for new images 16 , which are generally not a part of the training set 12”.
Paragraph [0033]; “there are a large number of such categories, such as at least fifty categories. The training images 12 are each manually labeled with one or more labels 14 drawn from the set 18 of labels”.
Paragraph [0051]; “At S 102 , a set 12 of manually-labeled training images 13 is provided, each image having at least one manually-assigned label from a finite set 18 of labels”.
Paragraph [0052]; “At S 104 , features are extracted from each of the training images based on the visual content of the image and a feature representation is generated based on the extracted features (e.g., in the form of a features vector)”.
Paragraph [0054 – 0055]; “At S 108 , one or more structured models 42 are generated, based on the labels 14 and on either the image features (S 104 ) or the classifier output (S 106 ) ... At S 110 , the trained classifier system 40 and structured models 42 are stored in computer memory 22 . This completes the training phase”.
Paragraph [0089 – 0090]; “feature function of the image (such as a classifier score for the image for given label i, a Fisher vector element, or the like) ... For efficiency, compact feature functions of the form φi (x)=[si (x),1]T are used, where si (x) is an SVM (support vector machine) score output by classifier system”.
Regarding claim 14, Mensink discloses the anomaly labeled-assistant detection system according to claim 1, further comprising a display apparatus electrically connected to the computing apparatus, wherein the computing apparatus further provides a user interface displayed on the display apparatus, to perform anomaly labeled detection through the user interface (Mensink in [0047] discloses, “The user's computer 28 which serves as the GUI may be a PC, such as a desktop ... the computer 28 includes a display device 80 , such as an LCD screen, plasma screen, or the like, which displays images to a user for labeling (in the interactive mode)”).
Summary of Citations (Mensink)
Paragraph [0047]; “The user's computer 28 which serves as the GUI may be a PC, such as a desktop ... the computer 28 includes a display device 80 , such as an LCD screen, plasma screen, or the like, which displays images to a user for labeling (in the interactive mode)”.
Regarding claim 15, method claim 15 corresponds to apparatus claim 1. Therefore, the rejection analysis of claim 1 is applicable to claim 15.
Regarding claim 16, method claim 16 corresponds to apparatus claim 3. Therefore, the rejection analysis of claim 3 is applicable to claim 16.
Regarding claim 17, method claim 17 corresponds to apparatus claim 4. Therefore, the rejection analysis of claim 4 is applicable to claim 17.
Regarding claim 18, method claim 18 corresponds to apparatus claim 5. Therefore, the rejection analysis of claim 5 is applicable to claim 18.
Regarding claim 23, method claim 23 corresponds to apparatus claim 3. Therefore, the rejection analysis of claim 3 is applicable to claim 23.
Regarding claim 24, method claim 24 corresponds to apparatus claim 4. Therefore, the rejection analysis of claim 4 is applicable to claim 24.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mensink in view of Zhang, International Patent Application Publication No. WO-2024031999-A1 (hereinafter Zhang).
Regarding claim 6, Mensink discloses the anomaly labeled-assistant detection system according to claim 5.
Mensink does not disclose the limitations as further recited in the claim.
Zhang discloses that the computing apparatus performs dimensionality reduction on the image features by using a hierarchical clustering method (Zhang in [Page – 12, Paragraph – 6] discloses, “The clustering algorithm includes at least one of a clustering algorithm based on Euclidean distance, a hierarchical clustering algorithm, a nonlinear dimensionality reduction clustering algorithm, and a density-based clustering algorithm”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Zhang into the system of Mensink because it would allow the system to more efficiently detect anomaly labeled data.
Summary of Citations (Zhang)
[Page – 12, Paragraph – 6]; “The clustering algorithm includes at least one of a clustering algorithm based on Euclidean distance, a hierarchical clustering algorithm, a nonlinear dimensionality reduction clustering algorithm, and a density-based clustering algorithm”.
Regarding claim 19, method claim 19 corresponds to apparatus claim 6. Therefore, the rejection analysis and motivation to combine of claim 6 is applicable to claim 19.
Claims 7 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mensink in view of Khan, US Patent Application Publication No. US-20200380312-A1 (hereinafter Khan).
Regarding claim 7, Mensink discloses the anomaly labeled-assistant detection system according to claim 5.
Mensink does not disclose the limitations as further recited in the claim.
Khan discloses that when the labeled category and the inference category generated by the anomaly labeled detection model are both incorrect, or when the labeled category is correct but the inference category generated by the anomaly labeled detection model is incorrect, labeled image data corresponding to the incorrect inference category is added to the data set to retrain the anomaly labeled detection model (Khan in [0048] discloses, “updating the verified annotated training data 201 based on the plurality of annotations for each of the plurality of input data points based on the validation and generating at least one of the state-label mapping model 207 and the comparative ANN model 206 based on the verified annotated training data 201”, wherein the validation identifies wrong annotations (as disclosed in [0036])).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Khan into the system of Mensink because it would allow the model to recognize its own misjudgment and to improve itself.
Summary of Citations (Khan)
Paragraph [0036]; “Based on the analysis, the validation module 309 may determine the missed and wrong annotations for a particular state using the data from the state-label mapping model 207 and the comparative ANN model 206”.
Paragraph [0048]; “updating the verified annotated training data 201 based on the plurality of annotations for each of the plurality of input data points based on the validation and generating at least one of the state-label mapping model 207 and the comparative ANN model 206 based on the verified annotated training data 201”.
Regarding claim 20, method claim 20 corresponds to apparatus claim 7. Therefore, the rejection analysis and motivation to combine of claim 7 is applicable to claim 20.
Claims 8 – 13, 21, 22, 25 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Mensink in view of Bai, US Patent Application Publication No. US-20220292131-A1 (hereinafter Bai), and further in view of Chen, International Patent Application Publication No. WO-2023088174-A1 (hereinafter Chen).
Regarding claim 8, Mensink discloses the anomaly labeled-assistant detection system according to claim 2, wherein when the computing apparatus performs multi-category anomaly labeled detection by using the anomaly labeled detection model (Mensink in [0004] discloses, “A problem arises in classification when dealing with many classes, for example, when the aim is to assign a single label to an image from many possible ones, or when predicting the probability distribution over all labels for an image”), the anomaly labeled detection model further comprises (Mensink in [0028] discloses, “The exemplary system includes structured models for image labeling, which take into account label dependencies. These models are shown to be more expressive than independent label predictors, and can lead to more accurate predictions”), and lists several inference categories ranked by the similarity degree (Mensink in [0064] discloses, “At S 128 , the image is labeled, e.g., with one or more of the most probable labels computed at S 126”, wherein the most probable labels imply ranking by the similarity degree).
Mensink does not disclose the limitations as further recited in the claim and struck through above.
Bai discloses a coarse-grained model and a fine-grained model (Bai in [0048] discloses, “The target classification model may alternatively include two models: a coarse-grained classification model and a fine-grained classification model”), the computing apparatus extracts a plurality of first image features of the labeled image data through the coarse-grained model (Bai in [0039] discloses, “The identical feature, the similar feature and the category can be extracted through a feature model”. Furthermore, Bai in [0048] discloses, “The coarse-grained classification model may recognize 6 kinds of targets”), and then calculates a similarity degree of the first image features through the fine-grained model (Bai in [0041] discloses, “The distance between the similar feature of the target image and the similar feature of the candidate image is calculated to obtain the similarity score of the candidate image”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Bai into the system of Mensink because it would allow the system to efficiently scale to many categories with less computational power.
Mensink and Bai in the combination do not disclose the limitations as further recited in the claim and struck through above.
Chen discloses that the computing apparatus compares the labeled category of the labeled image data and the several ranked inference categories (Chen in [Page – 6, Paragraph – 2] discloses, “The category loss value can be determined based on the predicted label corresponding to the labeled data and the calibration category corresponding to the labeled data”, wherein the calibration category is the labeled category and the loss value equates to comparing), and when the labeled category of the labeled image data is different from the several ranked inference categories, the labeled image data is listed as the anomaly labeled data (Chen in [Page – 16, Paragraph – 1] discloses, “the remaining pseudo-labels except the high-quality pseudo-labels corresponding to this category are used as uncertain pseudo-labels corresponding to this category”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Chen into the system of Mensink in view of Bai because it would allow the system to avoid incorrectly marking valid image data as anomalous.
Summary of Citations (Mensink)
Paragraph [0004]; “A problem arises in classification when dealing with many classes, for example, when the aim is to assign a single label to an image from many possible ones, or when predicting the probability distribution over all labels for an image”.
Paragraph [0028]; “The exemplary system includes structured models for image labeling, which take into account label dependencies. These models are shown to be more expressive than independent label predictors, and can lead to more accurate predictions”.
Paragraph [0064]; “At S 128 , the image is labeled, e.g., with one or more of the most probable labels computed at S 126”.
Summary of Citations (Bai)
Paragraph [0039]; “The identical feature, the similar feature and the category can be extracted through a feature model”.
Paragraph [0041]; “The distance between the similar feature of the target image and the similar feature of the candidate image is calculated to obtain the similarity score of the candidate image”.
Paragraph [0048]; “The target classification model may alternatively include two models: a coarse-grained classification model and a fine-grained classification model. The coarse-grained classification model may recognize 6 kinds of targets”.
Summary of Citations (Chen)
[Page – 6, Paragraph – 2]; “The category loss value can be determined based on the predicted label corresponding to the labeled data and the calibration category corresponding to the labeled data”.
[Page – 16, Paragraph – 1]; “the remaining pseudo-labels except the high-quality pseudo-labels corresponding to this category are used as uncertain pseudo-labels corresponding to this category”.
Regarding claim 9, Chen in the combination discloses the anomaly labeled-assistant detection system according to claim 8, wherein the several inference categories ranked by the similarity degree are at most the top five (Chen in [Page – 7, Paragraph – 8] discloses, “the probability vector can be [ 0.9,0.06,0.04], based on the probability vector, it can be known that the predicted label corresponding to the predicted frame is category 1”).
Summary of Citations (Chen)
[Page – 7, Paragraph – 8]; “the probability vector can be [ 0.9,0.06,0.04], based on the probability vector, it can be known that the predicted label corresponding to the predicted frame is category 1, and the probability value corresponding to the predicted label is 0.9”.
Regarding claim 10, claim 10 is similar in scope to claim 3 and is thus rejected under the same rationale.
Regarding claim 11, claim 11 is similar in scope to claim 4 and is thus rejected under the same rationale.
Regarding claim 12, Mensink in the combination discloses the anomaly labeled-assistant detection system according to claim 10, wherein before the computing apparatus performs the multi-category anomaly labeled detection through the anomaly labeled detection model (Mensink in [0033] discloses, “The labels 14 are drawn from a predefined set 18 of labels (an “annotation vocabulary”), which may correspond to a set of visual categories”, wherein drawing labels from a set implies multiple categories), and the method comprises: preparing at least ten pieces of the labeled image data in the data set corresponding to each category for training, to (Mensink in [0051] discloses a set of training images, which implies at least ten pieces, “At S 102 , a set 12 of manually-labeled training images 13 is provided, each image having at least one manually-assigned label from a finite set 18 of labels”).
Mensink does not disclose the limitations as further recited in the claim.
Bai discloses that the coarse-grained model and the fine-grained model are first established (Bai in [0048] discloses, “The target classification model may alternatively include two models: a coarse-grained classification model and a fine-grained classification model”), and extracting a plurality of second image features of the same at least ten pieces of the labeled image data through the coarse-grained model (Bai in [0076] discloses that features are extracted from the target subject, “the extracting unit 502 is further configured to: extract the similar feature from the target subject through a similar feature model”, and in [0048] Bai discloses that the target classification model includes a coarse-grained classification model);
Mensink and Bai in the combination do not disclose the limitations as further recited in the claim.
Chen discloses establishing the fine-grained model by using the second image features, to perform the multi-category anomaly labeled detection through the coarse-grained model and the fine-grained model (Chen in [Page – 3, Paragraph – 8] discloses an initial learning model (the coarse-grained model) and an initial management model (the fine-grained model), “Input the unlabeled data into the initial learning model to obtain the first predicted value corresponding to the unlabeled data; ... Input the unlabeled data into the initial management model to obtain a second predicted value corresponding to the unlabeled data”).
Summary of Citations (Chen)
[Page – 3, Paragraph – 8]; “Input the unlabeled data into the initial learning model to obtain the first predicted value corresponding to the unlabeled data; ... Input the unlabeled data into the initial management model to obtain a second predicted value corresponding to the unlabeled data”.
Summary of Citations (Bai)
Paragraph [0048]; “The target classification model may alternatively include two models: a coarse-grained classification model and a fine-grained classification model. The coarse-grained classification model may recognize 6 kinds of targets”.
Paragraph [0076]; “In some alternative implementations of this embodiment, the extracting unit 502 is further configured to: extract the similar feature from the target subject through a similar feature model”.
Summary of Citations (Mensink)
Paragraph [0033]; “The labels 14 are drawn from a predefined set 18 of labels (an “annotation vocabulary”), which may correspond to a set of visual categories, such as landscape, frees, rocks, sky, male, female, single person, no person, animal, and the like”.
Regarding claim 13, claim 13 corresponds to claim 7 except for the coarse-grained model and the fine-grained model; thus, the rejection of claim 7 is incorporated herein. With respect to the additional limitation, reference Bai in [0048] discloses, “The target classification model may alternatively include two models: a coarse-grained classification model and a fine-grained classification model”.
Summary of Citations (Bai)
Paragraph [0048]; “The target classification model may alternatively include two models: a coarse-grained classification model and a fine-grained classification model. The coarse-grained classification model may recognize 6 kinds of targets”.
Regarding claim 21, method claim 21 corresponds to apparatus claim 8. Therefore, the rejection analysis and motivation to combine of claim 8 is applicable to claim 21.
Regarding claim 22, method claim 22 corresponds to apparatus claim 9. Therefore, the rejection analysis and motivation to combine of claim 9 is applicable to claim 22.
Regarding claim 25, method claim 25 corresponds to apparatus claim 12. Therefore, the rejection analysis and motivation to combine of claim 12 is applicable to claim 25.
Regarding claim 26, method claim 26 corresponds to apparatus claim 13. Therefore, the rejection analysis and motivation to combine of claim 13 is applicable to claim 26.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAID MUHAMMAD SALEH whose telephone number is (703)756-1684. The examiner can normally be reached M-F 8 am - 5 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le, can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZAID MUHAMMAD SALEH/
Examiner, Art Unit 2668
01/02/2025
/VU LE/Supervisory Patent Examiner, Art Unit 2668