Prosecution Insights
Last updated: April 19, 2026
Application No. 16/969,964

SEARCH SYSTEM, SEARCH METHOD, AND PROGRAM

Non-Final OA: §103, §DP
Filed: Aug 13, 2020
Examiner: ALGHAZZY, SHAMCY
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Rakuten Group Inc.
OA Round: 5 (Non-Final)
Grant Probability: 48% (Moderate)
OA Rounds: 5-6
To Grant: 3y 11m
With Interview: 49%

Examiner Intelligence

Career Allow Rate: 48% (30 granted / 62 resolved; -6.6% vs TC avg)
Interview Lift: +0.7% (minimal, ~+1%, comparing resolved cases with and without interview)
Avg Prosecution (typical timeline): 3y 11m
Currently Pending: 25
Total Applications (career history): 87 across all art units

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 39.3% (-0.7% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 62 resolved cases

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submissions filed on 09/29/2025 have been entered.

Response to Arguments

Applicant's arguments, see Remarks page 9, filed 09/29/2025, with respect to the nonstatutory double patenting rejection of claims 1-20 have been fully considered and are moot in light of the updated double patenting rejection below. Applicant's arguments, see Remarks pages 10-14, filed 09/29/2025, with respect to the rejection of claims 1-20 under 35 U.S.C. § 103 have been fully considered and are moot in light of the new rejection below.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-17 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 6-19 of copending Application No. 16/971,292.
Although the claims at issue are not identical, they are not patentably distinct from each other and are obvious variations of one another, because the claims of both applications are functionally identical but rearranged in a different order and with only slight modifications that do not significantly alter the scope of the claim(s). Both applications are directed towards natural language processing using neural network classifiers. One of ordinary skill in the art would conclude, after a cursory examination of the claims, that the two claimed inventions are obvious variants of each other. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Instant Application    Application No. 16/971,292
Claim 1                Claims 1, 5 & 8
Claim 2                Claim 6
Claim 3                Claim 7
Claim 4                Claim 8
Claim 5                Claim 9
Claim 6                Claim 10
Claim 7                Claim 11
Claim 8                Claim 12
Claim 9                Claim 13
Claim 10               Claim 14
Claim 11               Claim 15
Claim 12               Claim 16
Claim 13               Claim 17
Claim 14               Claim 18
Claim 15               Claim 19
Claim 16               Claim 7
Claim 17               Claim 5
Claim 18               (see discussion below)
Claim 19               Claims 1 & 6
Claim 20               Claim 1

Regarding claim 14, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Application No. 16/971,292 to implement the dependent system claims, such as claim 5, in a method embodiment. Regarding claim 15, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Application No. 16/971,292 to implement the dependent system claims, such as claim 5, in a non-transitory storage embodiment.

Regarding claim 18, Application No. 16/971,292 fails to particularly teach wherein each classification among the plurality of classifications has an associated threshold value; and wherein associated threshold values between at least two classifications among the plurality of classifications are different. However, Winfield teaches wherein each classification among the plurality of classifications has an associated threshold value; and wherein associated threshold values between at least two classifications among the plurality of classifications are different ([Col. 16, Lines 31-34] What is significant is that each established Classification Range is assigned a unique corresponding classification value distinguishing it from the remaining Classification Ranges). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Application No. 16/971,292 to incorporate wherein each classification among the plurality of classifications has an associated threshold value; and wherein associated threshold values between at least two classifications among the plurality of classifications are different as taught by Winfield [Col. 16, Lines 31-34] so that individual movies comprising the previously released movies of the dataset can be easily grouped by Classification Range in accordance with their respective assigned classification codes [Col. 16, Lines 34-37].

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-8, 11-12, 14-15, 17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li (An Improved Faster R-CNN for Same Object Retrieval, 2017), in view of Cox (US20040225865A1), further in view of Sugaya (US20190289075A1), and further in view of ICHIMURA (US20050143999A1).

Regarding claim 1, Li teaches A search system comprising: a learner implemented by a machine learning algorithm and executed on at least one processor in communication with a memory, that calculates a feature quantity of information that is input and outputs a classification result of the information based on the feature quantity ([Section III, Fig. 2-3] The examiner notes that Li teaches a trained SOR Faster R-CNN neural network model that inputs images, extracts feature maps, and uses the feature maps to generate classifications); at least one processor configured to: store at least one of a feature quantity or a classification result of information to be searched, which has been input in the learner, in a database corresponding to a classification of the information to be searched among a plurality of databases prepared for respective classifications ([Section III, Fig.
2-3] The examiner notes that Li teaches that the images of the Oxford dataset are input into the neural network and their class scores are compared with that of the query image, and the feature vectors of the dataset of images are also compared with the feature vector of the query image to identify the nearest cosine distance; such comparison inherently requires that the class score and feature vector of each image in the dataset of images are both stored. The examiner further considers any coarse set of images, as taught by Li [Page 13669, Para. 1], to be a unique database, since images containing object proposals are first collected into a coarse set if their confidence scores are similar to the query object proposal and those coarse sets are searched for images that match the query image).

input input information in the learner and obtain a classification result indicating a classification of the input information that is output from the learner; wherein the classification result indicates a probability of each classification among a plurality of classifications ([Page 13669, Para. 1]:

1) A query image and a candidate image are given as input to the ZF model.
2) The conv3 and conv5 of the ZF model are L2 normalized and concatenated.
3) The normalized result is given as input to RPN.
4) RPN produces the RPN region proposal.
5) The RPN proposal is given as input to the concatenated layer.
6) The features of the RPN proposal are given as input to the RoI pooling layer.
7) The result of the RoI pooling layer is given as input to the FC layers.
8) A classification name and a bounding box with a confidence score are generated via regression.
9) Coarse set selection: the top 10 images that contain object proposals with the closest confidence scores to the query object proposal are selected as the coarse set.
10) Ranking by cosine distance: the image that has the nearest cosine distance to the query image is selected as the query object.
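Editor's illustration (not part of the Office action record): the coarse-set selection and cosine-distance ranking described in steps 9-10 can be sketched in a few lines. The function names, array shapes, and toy values below are assumptions for illustration only.

```python
import numpy as np

def coarse_set(query_score, candidate_scores, k=10):
    """Step 9: select the k candidates whose confidence scores are
    closest to the query object proposal's confidence score."""
    diffs = np.abs(candidate_scores - query_score)
    return np.argsort(diffs)[:k]

def rank_by_cosine(query_vec, candidate_vecs):
    """Step 10: rank candidates by cosine distance to the query
    feature vector (smaller distance means more similar)."""
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    distances = 1.0 - c @ q       # cosine distance = 1 - cosine similarity
    return np.argsort(distances)  # best match first

# Toy usage with hypothetical confidence scores.
scores = np.array([0.91, 0.40, 0.88, 0.10])
idx = coarse_set(0.90, scores, k=2)  # indices 0 and 2 (scores 0.91 and 0.88)
```

The query object in step 10 would then be the first index returned by rank_by_cosine over the coarse set.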
The examiner notes that Li teaches inputting a query image to a trained model (Step 1) and obtaining a classification result indicating a classification of the input as well as a probability of the classification (Step 8).

However, Li is not relied upon to explicitly teach wherein each database in the plurality of databases contains one or more unique pieces of information that are not contained in any of the other databases. Li is also not relied upon to explicitly teach wherein each database of the plurality of databases contains only a single classification of information. Li is also not relied upon to explicitly teach select a corresponding database, from among the plurality of databases, for each classification in the classification result having a probability above a threshold value; and search for information that is similar to the input information in at least one of the feature quantity or the classification result in the corresponding database.

On the other hand, Cox teaches wherein each database in the plurality of databases contains one or more unique pieces of information that are not contained in any of the other databases ([0050] Typically, each of the many databases 104a and 104b contain unique data, although there may be some redundancy in the databases or even redundant databases. Each of the databases 104a and 104b has an associated database index 116 stored in the index engines 110. The examiner notes that Li and Cox are both considered to be analogous because they are in the same field of object retrieval systems.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li’s search method to incorporate wherein each database in the plurality of databases contains one or more unique pieces of information that are not contained in any of the other databases as taught by Cox [0050] so that the data in existing databases 104a and 104b may be tied together in a transparent fashion, such that for the end user the access to data is both business and workflow transparent. [0052]). Furthermore, Sugaya teaches Wherein each database of the plurality of databases contains only a single classification of information ([0065] Furthermore, instead of classifying all combination types with one database, one database may exist for each combination type. That is, the same number of databases as the number of combination types may exist. The examiner notes that Sugaya teaches a plurality of databases where one database exists for each classification of information. The examiner further notes that Li and Sugaya are both considered to be analogous because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li’s search method to incorporate Wherein each database of the plurality of databases contains only a single classification of information as taught by Sugaya [0065] to make the determining of the combination type and the combination of the edge devices by specifying the type name corresponding to the edge devices included in the device data received more efficient [0066-0067]). 
Furthermore, ICHIMURA teaches select a corresponding database, from among the plurality of databases, for each classification in the classification result having a probability above a threshold value; and search for information that is similar to the input information in at least one of the feature quantity or the classification result in the corresponding database ([0040-0041] Referring back to FIG. 2, the procedure of the question answering process will be explained in detail again. In step S203, whether the maximum value of the posteriori probability P(W|Y) is a threshold value or more is determined. If the maximum value of the posteriori probability P(W|Y), i.e., the speech recognition accuracy evaluation value is equal to or greater than the threshold value, the determination unit 114 selects the text database as a database to be searched, and the flow advances to step S206. If the speech recognition accuracy evaluation value is less than the threshold value, the determination unit 114 selects the speech database as a database to be searched, and the flow advances to step S204. In step S204, the retriever 113 searches the speech database 111 by the question speech by using the speech feature parameter time series Y, without using the speech recognition result. The examiner notes that ICHIMURA teaches selecting a database from a plurality of databases based on an accuracy value being above a predetermined threshold and searching the selected database based on a feature quantity. The examiner further notes that Li and ICHIMURA are both considered to be analogous because they are in the same field of searching data.
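Editor's illustration (not part of the Office action record): the combination read onto claim 1, with one database per classification (Sugaya) selected when that classification's probability clears its associated threshold (ICHIMURA, with per-classification thresholds as argued via Winfield), might be sketched as follows. All names and values are hypothetical.

```python
def select_databases(class_probs, thresholds, databases):
    """Select, from among the plurality of databases, the database of
    each classification whose probability is above that classification's
    associated threshold value (thresholds may differ between classes)."""
    selected = {}
    for cls, prob in class_probs.items():
        if prob > thresholds[cls]:  # per-classification threshold
            selected[cls] = databases[cls]
    return selected

# One database per classification; thresholds differ between classes.
databases = {"shoe": ["shoe_db"], "bag": ["bag_db"], "hat": ["hat_db"]}
thresholds = {"shoe": 0.5, "bag": 0.3, "hat": 0.9}
probs = {"shoe": 0.7, "bag": 0.2, "hat": 0.95}
hits = select_databases(probs, thresholds, databases)  # "shoe" and "hat" selected
```

The similarity search of the claim would then run only inside the selected databases.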
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li’s search method to incorporate select a corresponding database, from among the plurality of databases, for each classification in the classification result having a probability above a threshold value; and search for information that is similar to the input information in at least one of the feature quantity or the classification result in the corresponding database as taught by ICHIMURA [0040-0041] to facilitate a question answering process [0040-0041]). Regarding claim 2, Li teaches The search system according to claim 1, wherein the learner calculates a feature vector as the feature quantity, and the at least one processor performs the search based on a distance between a feature vector of information to be searched, which is stored in a database corresponding to the classification result of the input information, and a feature vector of the input information ([Section III, Fig. 2-3] The examiner notes that Li teaches calculating feature vectors of the feature maps of the images and ranking candidate images according to the cosine distance between their feature vectors and the feature vector of the query image). Regarding claim 3, Li teaches The search system according to claim 1 wherein the at least one processor stores at least one of the feature quantity or the classification result of the information to be searched in the database corresponding to the classification result of the information to be searched that is output from the learner ([Section III, Fig. 
2-3] The examiner notes that Li teaches that the images of the Oxford dataset are input into a neural network which generates feature vectors based on calculated feature maps, and their class scores are compared with that of the query image, and the feature vectors of the dataset of images are also compared with the feature vector of the query image to identify a nearest cosine difference; such calculations and comparisons inherently require that the images and the class score and feature vector of each image are stored. The examiner interprets such database to be the claimed database corresponding to the classification result of the information to be searched that is output from the learner). Regarding claim 4, Li teaches The search system according to claim 3, wherein the at least one processor stores at least one of the feature quantity or the classification result of the information to be searched in a database of a classification having a probability of the information to be searched, which is output from the learner, wherein the probability is equal to or more than the threshold value ([Section III, Fig. 2] The examiner notes that Li discloses that the neural network outputs a “Class Score” for each image, and the class scores of the database images are later compared with that of the query image; it is inherent that such comparison requires that the class score of each image in the dataset of images is stored in conjunction with that image; and the examiner interprets those scores to be the claimed probabilities. Li also teaches that an object proposal is a potential object if the class score exceeds a class score threshold of 0.8 [Page 13670, Section III.D.1]). 
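Editor's illustration (not part of the Office action record): the claim 4 reading, storing a record in the database of each classification whose output probability is equal to or more than the threshold (cf. Li's class-score threshold of 0.8), might look like the following hypothetical sketch.

```python
def store_record(record_id, class_probs, databases, threshold=0.8):
    """Store the record in the database of every classification whose
    probability output by the learner is equal to or more than the
    threshold value (0.8 here, after Li's class-score threshold)."""
    stored_in = []
    for cls, prob in class_probs.items():
        if prob >= threshold:
            databases.setdefault(cls, []).append(record_id)
            stored_in.append(cls)
    return stored_in

dbs = {}
placed = store_record("img_001", {"shoe": 0.92, "bag": 0.40}, dbs)  # placed in "shoe" only
```

Storage-time routing like this is the counterpart of the query-time database selection recited in claim 1.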
Regarding claim 5, Li teaches The search system according to claim 1, wherein the at least one processor performs the search based on a database of a classification having a probability of the input information, which is output from the learner, wherein the probability is equal to or more than the threshold value ([Section III, Fig. 2] The examiner notes that Li discloses that the neural network outputs a “Class Score” for each image, and the class scores of the database images are compared with that of the query image. Li also teaches that an object proposal is a potential object if the class score exceeds a class score threshold of 0.8 [Page 13670, Section III.D.1]). Regarding claim 7, Li teaches The search system according to claim 1, wherein the at least one processor: obtains a similarity based on at least one of the feature quantity or the classification result of the input information and at least one of the feature quantity or the classification result of the information to be searched, and displays the similarity in association with the information to be searched ([Section III] Li teaches calculating a cosine distance between feature vectors of each of the candidate images and the feature vector of the query image; Figs. 6-7 show that the similarity score is displayed on the bounding box in the image). Regarding claim 8, Li teaches The search system according to claim 1, wherein the learner calculates a feature quantity of an image that is input and outputs a classification result of an object included in the image, the information to be searched is an image to be searched, the input information is an input image, and the at least one processor searches for an image to be searched that is similar to the input image in at least one of the feature quantity or the classification result ([Section III, Figs. 
2-3] The examiner notes that Li discloses a trained neural network that calculates feature maps for a query image and generates region proposals of objects in the query image in the form of bounding boxes and classification scores for the objects in the bounding boxes based on the calculated feature maps. Li further teaches calculating a cosine distance between feature vectors of each of the candidate images and the feature vector of the query image to find the closest match to the query image). Regarding claim 11, Li teaches The search system according to claim 8, wherein the learner outputs a classification result of an object included in the image that is input and position information about a position of the object, and the at least one processor displays the position information of the image to be searched in association with the image to be searched ([Section III, Figs. 2-3] The examiner notes that Li teaches a trained neural network that outputs bounding boxes and classification scores for the objects in the bounding boxes based on the calculated features. Li also teaches displaying the bounding boxes of the dataset images and the query image [Fig. 4-7]). Regarding claim 12, Li teaches The search system according to claim 1, wherein the learner outputs a classification result of an object included in the image that is input and position information about a position of the object, and the at least one processor displays the position information of the input image in association with the input image ([Section III, Figs. 2-3] The examiner notes that Li teaches a trained neural network that outputs bounding boxes and classification scores for the objects in the bounding boxes based on the calculated features. Li also teaches displaying the bounding boxes of the dataset images and the query image [Fig. 4-7]). Claim 14 is rejected based upon the same rationale as the rejection of claim 1 since it is the method claim corresponding to the system claim. 
Claim 15 is rejected based upon the same rationale as the rejection of claim 1 since it is the non-transitory computer-readable storage medium claim corresponding to the system claim. Regarding claim 17, Li teaches The search system according to claim 1. However, Li is not relied upon to explicitly teach wherein each of a number of the plurality of databases is equal to a number of classifications in the learner. On the other hand, Sugaya teaches wherein each of a number of the plurality of databases is equal to a number of classifications in the learner ([0065] Furthermore, instead of classifying all combination types with one database, one database may exist for each combination type. That is, the same number of databases as the number of combination types may exist. The examiner notes that Sugaya teaches a plurality of databases where the number of databases equals the number of classes of information. The examiner further notes that Li and Sugaya are both considered to be analogous because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li’s search method to incorporate wherein each of a number of the plurality of databases is equal to a number of classifications in the learner as taught by Sugaya [0065] to make the determining of the combination type and the combination of the edge devices by specifying the type name corresponding to the edge devices included in the device data received more efficient [0066-0067]). 
Regarding claim 19, Li teaches wherein the learner calculates the probability based on an image feature vector; and search in the corresponding database based on a distance calculated based on the image feature vector ([Page 211, Section E] KNN (short for K Nearest Neighbor) method calculates the distance (Euler Distance for example) between its feature vector and other feature vectors and finds the K nearest ones).

Regarding claim 20, Li teaches: wherein the input information is input into the learner ([Page 209, Para. 2] After that trained learning model takes the feature vector as input); after the at least one processor stores the at least one of the feature quantity or the classification result of information ([Page 209, Para. 1] Then feature extractor is invoked to analyze the structure of these trees, count each feature values as the feature vector and store them for further training); to be searched in the corresponding database corresponding to the classification of the information ([Page 209, Para. 6] Genetic Algorithm [16] is a heuristic search algorithm which mimics the process of natural selection. In this method a population of candidate solutions (also named individuals) to an optimization problem are evolved toward better solutions. In this feature selection scenario, in order to find the most effective N features rapidly we make a little difference which evolves new individuals with exactly N features each time).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Li (An Improved Faster R-CNN for Same Object Retrieval, 2017), in view of Cox (US20040225865A1), further in view of Sugaya (US20190289075A1), further in view of ICHIMURA (US20050143999A1), and further in view of Lin (Deep Learning of Binary Hash Codes for Fast Image Retrieval, 2015).

Regarding claim 6, Li teaches The search system according to claim 1.
However, Li is not relied upon to explicitly teach wherein in a case where there are a plurality of databases that correspond to the classification result of the input information, based on each of the plurality of databases, the at least one processor searches for candidates of information to be searched that is similar to the input information in at least one of the feature quantity or the classification result, and narrows down the candidates.

On the other hand, Lin teaches wherein in a case where there are a plurality of databases that correspond to the classification result of the input information, based on each of the plurality of databases, the at least one processor searches for candidates of information to be searched that is similar to the input information in at least one of the feature quantity or the classification result, and narrows down the candidates ([Page 29, Figure 1, Module 3] The examiner notes that Lin discloses that multiple binary codes may be extracted from the query image and that a pool of candidates are identified as those having similar binary codes. Lin further performs fine-level search narrowing the candidates by calculating a Euclidean distance between feature vectors of the candidate images and the feature vectors of the query image [Page 30, Para. 4]. The examiner notes that Li and Lin are both considered to be analogous because they are in the same field of object retrieval systems.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li's search method to incorporate wherein in a case where there are a plurality of databases that correspond to the classification result of the input information, based on each of the plurality of databases, the at least one processor searches for candidates of information to be searched that is similar to the input information in at least one of the feature quantity or the classification result, and narrows down the candidates as taught by Lin [Page 29, Figure 1, Module 3] to simultaneously learn image representations and binary codes to make searching images more efficient [Page 27, Introduction]).

Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Li (An Improved Faster R-CNN for Same Object Retrieval, 2017), in view of Cox (US20040225865A1), further in view of Sugaya (US20190289075A1), further in view of ICHIMURA (US20050143999A1), further in view of Lin (Deep Learning of Binary Hash Codes for Fast Image Retrieval, 2015), and further in view of Yao (US20180018524A1).

Regarding claim 9, Li teaches The search system according to claim 8 wherein the learner calculates a feature quantity of an area indicating the object included in the input image and outputs a classification result of the area ([Section III, Figs. 2-3] The examiner notes that Li discloses a trained neural network that calculates feature maps for a query image and generates region proposals of objects in the query image in the form of bounding boxes and classification scores for the objects in the bounding boxes based on the calculated feature maps).

However, Li is not relied upon to explicitly teach in a case where a plurality of areas overlapping with one another are included in the input image, the learner outputs a classification result of an area having a highest probability based on a feature quantity of the area.
On the other hand, Yao teaches in a case where a plurality of areas overlapping with one another are included in the input image, the learner outputs a classification result of an area having a highest probability based on a feature quantity of the area ([0043] In one embodiment, the post-processing includes two main operations: (1) Non-Maximum Suppression (NMS) and (2) Bounding Box Regression (BBR), which are well-known in the art. NMS and BBR are two common techniques popularly used in object detection. In one embodiment, in the pedestrian detection system, a set of initial bounding boxes is obtained by setting a threshold to the multi-scale heat maps. That is, in one embodiment, only bounding boxes with probability/classification scores larger than a fixed threshold are considered as initial bounding boxes, i.e., candidates. This is based on training results, e.g., a box with classification score>0.5, is considered a potential pedestrian instance. However, many of those object boxes can be overlapped. With NMS, the object boxes are first sorted to create a list with descending classification/probability scores, where each box only has one unique classification score. Then, the overlap rates between the object box with the highest score and the other object boxes are computed, and those highly overlapped boxes (e.g., >0.5 overlap) with lower scores are discarded. In one embodiment, the object box with the highest score is taken as reference, and any other object boxes that have overlaps>0.5 with the reference box are all discarded, as their scores are lower than that of the reference box. Finally, this object box with the highest score is saved as the first result box. This procedure is iteratively run on the remained object boxes until no more object box can be found. As a result, only a small number of object boxes are finally obtained.
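Editor's illustration (not part of the Office action record): the Non-Maximum Suppression procedure Yao describes in [0043] is a standard object-detection technique and can be sketched as below; representing boxes as (x1, y1, x2, y2) tuples is an assumption.

```python
def iou(a, b):
    """Overlap rate (intersection over union) of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, overlap_thresh=0.5):
    """Sort boxes by descending score, keep the highest-scoring box as the
    reference, discard boxes overlapping it by more than overlap_thresh,
    and repeat on the remaining boxes; returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= overlap_thresh]
    return keep
```

Applied to a set of overlapping candidate boxes, this leaves only the highest-probability box per object, which is the behavior the examiner maps to the "area having a highest probability" limitation.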
The examiner notes that Yao generates bounding boxes that overlap for a certain object, and that a post-processing method called Non-Maximum Suppression (NMS) is used to select the object box with the highest probability to be the reference box from among a plurality of overlapping object boxes. The examiner notes that Li and Yao are both considered to be analogous because they are in the same field of object retrieval systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li's search method to incorporate in a case where a plurality of areas overlapping with one another are included in the input image, the learner outputs a classification result of an area having a highest probability based on a feature quantity of the area, as taught by Yao [0043], to obtain more accurate detection results with the least number of object boxes [0042]).

Regarding claim 10, Li teaches The search system according to claim 8, wherein the at least one processor stores, in a database, at least one of the feature quantity or the classification result of the area indicating the object included in the image to be searched ([Section III, Figs. 2-3] The examiner notes that Li teaches that the images of the Oxford dataset are input into the neural network and their class scores are compared with that of the query image, and the feature vectors of the dataset of images are also compared with the feature vector of the query image to identify the nearest cosine difference; such comparison inherently requires that the class score and feature vector of each image in the dataset of images are both stored).
However, Li is not relied upon to explicitly teach in a case where a plurality of areas overlapping with one another are included in the image to be searched, the at least one processor stores at least one of the feature quantity and the classification result of the area having a highest probability of the classification result.

On the other hand, Yao teaches in a case where a plurality of areas overlapping with one another are included in the image to be searched, the at least one processor stores at least one of the feature quantity and the classification result of the area having a highest probability of the classification result ([0043], the same NMS passage reproduced for claim 9 above. The examiner notes that Yao generates bounding boxes that overlap for a certain object, and that a post-processing method called Non-Maximum Suppression (NMS) is used to select the object box with the highest probability to be the reference box from among a plurality of overlapping object boxes. The examiner notes that the ranking, sorting, and selection of the object box with the highest probability inherently requires storing such a box to memory at some point. The examiner further notes that Li and Yao are both considered to be analogous because they are in the same field of object retrieval systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li's search method to incorporate in a case where a plurality of areas overlapping with one another are included in the image to be searched, the at least one processor stores at least one of the feature quantity and the classification result of the area having a highest probability of the classification result, as taught by Yao [0043], to obtain more accurate detection results with the least number of object boxes [0042]).

Claim 13 is rejected under 35 U.S.C.
103 as being unpatentable over Li (An Improved Faster R-CNN for Same Object Retrieval – 2017), in view of Cox (US20040225865A1), further in view of Sugaya (US20190289075A1), further in view of ICHIMURA (US20050143999A1), further in view of Lin (Deep Learning of Binary Hash Codes for Fast Image Retrieval – 2015), further in view of Ren (Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks – 2015).

Regarding claim 13, Li teaches The search system according to claim 8. However, Li is not relied upon to explicitly teach wherein in a case where a plurality of objects are included in the image that is input, the learner calculates a feature quantity and outputs a classification result for each object, each of the input image and the image to be searched includes a plurality of objects, and the at least one processor searches for an image to be searched that is similar to the input image in at least one of the feature quantity or the classification result of some of the objects.

On the other hand, Ren teaches wherein in a case where a plurality of objects are included in the image that is input, the learner calculates a feature quantity and outputs a classification result for each object, each of the input image and the image to be searched includes a plurality of objects, and the at least one processor searches for an image to be searched that is similar to the input image in at least one of the feature quantity or the classification result of some of the objects ([Page 2, Section 3] A Region Proposal Network (RPN) takes an image (of any size) as input and outputs a set of rectangular object proposals, each with an objectness score. We model this process with a fully-convolutional network [14], which we describe in this section. Because our ultimate goal is to share computation with a Fast R-CNN object detection network [5], we assume that both nets share a common set of conv layers.
In our experiments, we investigate the Zeiler and Fergus model [23] (ZF), which has 5 shareable conv layers, and the Simonyan and Zisserman model [19] (VGG), which has 13 shareable conv layers. To generate region proposals, we slide a small network over the conv feature map output by the last shared conv layer. The examiner notes that Ren teaches calculating feature maps of input images containing multiple objects and outputting anchor boxes, coordinates, and confidence scores for those objects within the image. The examiner further notes that Li and Ren are both considered to be analogous because they are in the same field of object retrieval systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li's search method to incorporate wherein in a case where a plurality of objects are included in the image that is input, the learner calculates a feature quantity and outputs a classification result for each object, each of the input image and the image to be searched includes a plurality of objects, and the at least one processor searches for an image to be searched that is similar to the input image in at least one of the feature quantity or the classification result of some of the objects, as taught by Ren [Page 2, Section 3], to enhance search accuracy and speed [Page 2, Section 2]).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Li (An Improved Faster R-CNN for Same Object Retrieval – 2017), in view of Cox (US20040225865A1), further in view of Sugaya (US20190289075A1), further in view of ICHIMURA (US20050143999A1), further in view of Homma (US20110176725A1).

Regarding claim 16, Li teaches The search system according to claim 1. However, Li is not relied upon to explicitly teach wherein each of the plurality of databases store images.
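The RPN mechanism quoted from Ren above (a small network slid over the shared conv feature map, emitting an objectness score at each position) can be illustrated with a toy NumPy sketch. The layer sizes, random weights, and single-anchor setup below are illustrative assumptions, not Ren's actual ZF/VGG configuration:

```python
import numpy as np

def rpn_objectness(feature_map, w_hidden, w_score, window=3):
    """Toy Region Proposal Network head: slide a small two-layer network
    over every window-sized patch of a conv feature map, producing one
    objectness score per spatial location (a single anchor, for simplicity)."""
    c, h, w = feature_map.shape
    out_h, out_w = h - window + 1, w - window + 1
    scores = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = feature_map[:, i:i + window, j:j + window].ravel()
            hidden = np.maximum(0.0, w_hidden @ patch)                 # ReLU layer
            scores[i, j] = 1.0 / (1.0 + np.exp(-(w_score @ hidden)))   # objectness in (0, 1)
    return scores

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 16, 16))           # stand-in shared conv feature map: C x H x W
w_hid = rng.standard_normal((32, 8 * 3 * 3)) * 0.1
w_sc = rng.standard_normal(32) * 0.1
obj = rpn_objectness(fmap, w_hid, w_sc)            # one score per sliding-window position
```

In a real Faster R-CNN the same sliding head also regresses box coordinates for multiple anchors per position; the sketch keeps only the objectness branch to show the shared-feature-map sliding-window idea.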
On the other hand, Homma teaches wherein each of the plurality of databases store images ([0056] The image storing section 23 includes a plurality of image databases which store images. The examiner notes that Li and Homma are both considered to be analogous because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li's search method to incorporate wherein each of the plurality of databases store images, as taught by Homma [0056], to generate a discriminator with high accuracy for discriminating a predetermined discrimination target [0014]).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Li (An Improved Faster R-CNN for Same Object Retrieval – 2017), in view of Cox (US20040225865A1), further in view of Sugaya (US20190289075A1), further in view of ICHIMURA (US20050143999A1), further in view of Winfield (US10748215).

Regarding claim 18, Li teaches The search system according to claim 1. However, Li is not relied upon to explicitly teach wherein each classification among the plurality of classifications has an associated threshold value; and wherein associated threshold values between at least two classifications among the plurality of classifications are different.

On the other hand, Winfield teaches wherein each classification among the plurality of classifications has an associated threshold value; and wherein associated threshold values between at least two classifications among the plurality of classifications are different ([Col. 16, Lines 31-34] What is significant is that each established Classification Range is assigned a unique corresponding classification value distinguishing it from the remaining Classification Ranges. The examiner notes that Li and Winfield are both considered to be analogous because they are in the same field of machine learning.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li's search method to incorporate wherein each classification among the plurality of classifications has an associated threshold value; and wherein associated threshold values between at least two classifications among the plurality of classifications are different, as taught by Winfield [Col. 16, Lines 31-34], so that individual movies comprising the previously released movies of the dataset can be easily grouped by Classification Range in accordance with their respective assigned classification codes [Col. 16, Lines 34-37]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

LEE (US20190188539A1): "LEE teaches a storage configured to store a plurality of filters each corresponding to a plurality of image patterns; and a processor configured to classify an image block including a target pixel and a plurality of surrounding pixels into one of the plurality of image patterns based on a relationship between pixels within the image block and to obtain a final image block in which the target pixel is image-processed."

KARUBE (US20180293255A1): "KARUBE teaches a similar damage search device which includes a database that stores first damage information generated on the basis of a damage image of a structure, the first damage information including a damage vector obtained by vectorizing damage of the structure, and damage structure information including at least one of information on a hierarchical structure of the damage vector or information on a direction of the damage vector."

Gokalp (US 2016/0063394 A1): "Gokalp teaches a method for training and improving computer-implemented data classification."

Harz (US 2014/0201113 A1): "Harz teaches a method for automatic genre determination of web content."

Eder (US 2009/0043637 A1): "Eder teaches a method for creating an
organization risk matrix and an organization value matrix to support the management and optimization of one or more aspects of organization risk and value."

Wold (US 9,641,680 B1): "Wold teaches a method for cross-linking events and persons using anonymized voice fingerprint identifiers and call metadata."

Shih (US 2019/0129989 A1): "Shih teaches an automated data configuration engine that parses unique files to extract portions of those files corresponding to unique identifiers."

Filgueiras (US10402448B2): "Filgueiras teaches machine-learned image descriptor models for image retrieval."

Moura (US2020/0118423A1): "Moura teaches ANNs to estimate flow of objects in one or more scenes each captured in one or more images."

Chen (US10423850B2): "Chen discloses a method of detecting infected objects from large field-of-view images."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAMCY ALGHAZZY, whose telephone number is (571) 272-8824. The examiner can normally be reached M-F 7:30am-5:00pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, OMAR FERNANDEZ RIVAS, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAMCY ALGHAZZY/
Examiner, Art Unit 2128

/OMAR F FERNANDEZ RIVAS/
Supervisory Patent Examiner, Art Unit 2128

Prosecution Timeline

Aug 13, 2020
Application Filed
Sep 26, 2023
Non-Final Rejection — §103, §DP
Dec 05, 2023
Interview Requested
Dec 11, 2023
Applicant Interview (Telephonic)
Dec 11, 2023
Examiner Interview Summary
Dec 30, 2023
Response Filed
Apr 04, 2024
Final Rejection — §103, §DP
Jun 19, 2024
Interview Requested
Jul 15, 2024
Response after Non-Final Action
Jul 24, 2024
Examiner Interview (Telephonic)
Jul 24, 2024
Response after Non-Final Action
Aug 14, 2024
Request for Continued Examination
Aug 17, 2024
Response after Non-Final Action
Oct 31, 2024
Non-Final Rejection — §103, §DP
Jan 31, 2025
Interview Requested
Feb 11, 2025
Examiner Interview Summary
Feb 11, 2025
Applicant Interview (Telephonic)
Mar 12, 2025
Response Filed
Jun 25, 2025
Final Rejection — §103, §DP
Sep 09, 2025
Interview Requested
Sep 18, 2025
Examiner Interview Summary
Sep 18, 2025
Applicant Interview (Telephonic)
Sep 29, 2025
Request for Continued Examination
Oct 06, 2025
Response after Non-Final Action
Feb 13, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596925
SINGLE-STAGE MODEL TRAINING FOR NEURAL ARCHITECTURE SEARCH
2y 5m to grant Granted Apr 07, 2026
Patent 12596922
ACCELERATING NEURAL NETWORKS IN HARDWARE USING INTERCONNECTED CROSSBARS
2y 5m to grant Granted Apr 07, 2026
Patent 12579408
ADAPTIVELY TRAINING OF NEURAL NETWORKS VIA AN INTELLIGENT LEARNING MANAGEMENT SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12572847
SYSTEMS AND METHODS FOR RESOURCE-AWARE MODEL RECALIBRATION
2y 5m to grant Granted Mar 10, 2026
Patent 12566966
TRAINING ADAPTABLE NEURAL NETWORKS BASED ON EVOLVABILITY SEARCH
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
48%
Grant Probability
49%
With Interview (+0.7%)
3y 11m
Median Time to Grant
High
PTA Risk
Based on 62 resolved cases by this examiner. Grant probability derived from career allow rate.
