Prosecution Insights
Last updated: April 19, 2026
Application No. 19/214,418

ATTENTION-BASED FEATURE FOR OBJECT-ORIENTED GRANULAR NEIGHBOR SEARCH

Non-Final OA — §102, §103
Filed: May 21, 2025
Examiner: KIM, PAUL
Art Unit: 2152
Tech Center: 2100 — Computer Architecture & Software
Assignee: Raven Industries Inc.
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 5m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 73%, above average (797 granted / 1089 resolved; +18.2% vs TC avg)
Interview Lift: +19.8% across resolved cases with interview (strong)
Typical Timeline: 3y 5m average prosecution; 25 applications currently pending
Career History: 1114 total applications across all art units

Statute-Specific Performance

§101: 16.4% (-23.6% vs TC avg)
§103: 47.2% (+7.2% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 9.1% (-30.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 1089 resolved cases.

Office Action

Grounds: §102, §103
DETAILED ACTION

This Office action is responsive to the following communication: Application filed on 21 May 2025. Claims 1-20 are pending and present for examination. Claims 1, 11, and 20 are in independent form.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 16 December 2025 is being considered by the examiner.

Drawings

The drawings were received on 21 May 2025. These drawings are accepted.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 4, 7, 10, 11, 12, 14, 17, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Chakraborty et al., U.S. Patent No. 11,947,590, filed on 15 September 2021 and issued on 2 April 2024.
As per independent claims 1, 11, and 20, Chakraborty teaches: A computer-implemented method comprising:

presenting a user interface, the user interface including a set of image similarity search options {See Chakraborty, Figure 4};

receiving a selected image similarity search option of the set of the image similarity search options, the selected image similarity search option associated with a type of image representation {See Chakraborty, column 4, lines 4-16, wherein this reads over “As shown, the figure depicts a CVS system 120 that is configured to interface with clients 110 to receive visual queries specifying a query image 160 and provide search results 196 indicating items that contain a view of the query image. In some embodiments, the client 110 may be a client computing device such as a personal computer or mobile device executing a web browser. The client 110 may be configured to display an interactive user interface (e.g. a graphical user interface or web interface) that allows a user of the device to perform the visual search. In some embodiments, the user interface may be provided as part of a web portal that allows users to browse and purchase products offered by an e-commerce service provider.”};

accessing an input query image file {See Chakraborty, column 5, lines 31-45, wherein this reads over “As shown, after the target image cache 150 is populated with feature vectors of target images in selected item categories, the CVS system may receive, via the client API gateway 122, a search request to search for items that include a query image 160. The query image 160 may be a logo, label, symbol, mark, or the like, that is displayed on the items.”};

generating an image representation of the input query image file according to the selected image similarity search option using a transformer model {See Chakraborty, column 5, line 65-column 6, line 3, wherein this reads over “As shown, in response to the search request from the client, the client API gateway 122 may issue a call 162 to the feature converter service 170. The feature converter service may employ one or more machine learning models 172 that are trained to convert the query image 160 into a feature vector to be used by the feature matching service 180.”};

querying an image representations database for image representations of a type that matches the type of image representation associated with the selected image similarity search option {See Chakraborty, column 6, lines 25-41, wherein this reads over “To perform the search, the feature matching service 180 will retrieve 184 the target image feature vectors 152a-c from the target image cache 150, and then execute the ML model 182 on individual ones of the target image feature vectors. In some embodiments, the feature matching service 180 will return results 186 that lists the target images that were matched to the query image 160. These results may be returned to the client API gateway 122 or, as shown, to a downstream component such as the search result personalization component 190 in this example. In some embodiments, execution within the feature matching service 180 may be parallelized over a pool of many compute nodes. In some embodiments, for a better user experience, the feature matching service 180 may return partial search results to the downstream components before the entire search is finished, and then subsequently update the match results with additional matches”};

filtering image representations resulting from the querying to a result set of image representations {See Chakraborty, column 6, lines 42-48, wherein this reads over “In some embodiments, the search result personalization component 190 is tasked with fetching the items corresponding to the list of matched target images and constructing the final search results 196 to be returned to the client. These search results will be personalized based on retrieved user features 192 of the user who issued the query. For example, certain items may be filtered out based on user preferences”};

and outputting a set of image files associated with the result set of image representations {See Chakraborty, column 6, lines 56-58, wherein this reads over “As shown in this example, the search results 196 constructed by the search result personalization component 190 are returned 194 to the client via the client API gateway 122.”}.

As per dependent claims 2 and 12, Chakraborty teaches: The computer-implemented method of claim 1, further comprising:

generating a set of feature vectors based on the set of image files {See Chakraborty, column 5, line 65-column 6, line 3, wherein this reads over “As shown, in response to the search request from the client, the client API gateway 122 may issue a call 162 to the feature converter service 170. The feature converter service may employ one or more machine learning models 172 that are trained to convert the query image 160 into a feature vector to be used by the feature matching service 180.”};

and training an object detection model based on the set of feature vectors {See Chakraborty, column 3, lines 22-24, wherein this reads over “The ML model is trained so that it can be generalized over a large population of previously-unseen query classes.”}.

As per dependent claims 4 and 14, Chakraborty teaches: The computer-implemented method of claim 1, wherein the set of image similarity search options includes a class-based image similarity option, an attention-based image similarity option, and an object-specific image similarity option {See Chakraborty, column 11, line 58 – column 12, line 3, wherein this reads over “As shown, in this example, the GUI 400 also allows the user to specify other non-image-based search criteria 420. Some of these criteria may be based on text-based metadata associated with the items, such as the item description, rating, and price, etc. In particular, the additional search criteria may include the item categories to use for the search. As discussed, in some embodiments, the CVS system may use a cache population component 150 to predict these categories for the user. In some embodiments, if the user does not specify any item categories for the search, all item categories will be used. As shown, buttons 422, 424, and 426 are used to change the item categories, change the other search criteria, and execute the search, respectively”}.
As per dependent claims 7 and 17, Chakraborty teaches: The computer-implemented method of claim 4, wherein the input query image file includes an identification of a subset of the input query image file and wherein the type of image representation is an object-specific representation {See Chakraborty, column 19, lines 51-62, wherein this reads over “In some embodiments, each object query will examine a different region of the encoded target image to determine if an instance of the query image is present in that region. As shown, these object queries 664 may be fed in parallel to the decoder 662, which uses cross-attention layers to look at the encoded target image 652 and infer the output embeddings for each of the object-queries 664. In some embodiments, the final representation of each object query 664 is independently decoded into box coordinates (e.g. output location 674) capturing the detected query image and class labels 672 indicating the inferred class of the query image.”}.

As per dependent claim 10, Chakraborty teaches: The computer-implemented method of claim 7, wherein the identification of the subset of the input query image file is represented as coordinates {See Chakraborty, column 19, lines 51-62, wherein this reads over “In some embodiments, each object query will examine a different region of the encoded target image to determine if an instance of the query image is present in that region. As shown, these object queries 664 may be fed in parallel to the decoder 662, which uses cross-attention layers to look at the encoded target image 652 and infer the output embeddings for each of the object-queries 664. In some embodiments, the final representation of each object query 664 is independently decoded into box coordinates (e.g. output location 674) capturing the detected query image and class labels 672 indicating the inferred class of the query image.”}.
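Claims 7, 17, and 10 as mapped above tie the object-specific representation to a user-identified subset of the query image expressed as coordinates. As a purely illustrative sketch (not code from the application or from Chakraborty), extracting a coordinate-identified subset before embedding could look like the following; the `crop_by_box` name and the half-open box convention are assumptions:

```python
def crop_by_box(pixels, box):
    """Extract the subset of an image identified by box coordinates.

    pixels: 2-D list of pixel rows; box: (x0, y0, x1, y1), half-open,
    i.e. columns x0..x1-1 and rows y0..y1-1 are kept.
    """
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in pixels[y0:y1]]

# A 3x3 toy "image"; keep columns 1-2 of rows 0-1.
image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
print(crop_by_box(image, (1, 0, 3, 2)))  # [[2, 3], [5, 6]]
```

In a real pipeline the cropped region, rather than the full image, would then be passed to whatever encoder produces the object-specific representation.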
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Chakraborty, in view of Cui et al., USPGPUB No. 2026/0011001, filed on 11 September 2025, claiming priority to 6 March 2024, and published on 8 January 2026.
As per dependent claims 3 and 13, Chakraborty, in combination with Cui, discloses: The computer-implemented method of claim 1, wherein filtering image representations resulting from the querying to the result set of image representations includes:

calculating a cosine similarity value between the image representation of the input query image file and the image representations of the querying {See Chakraborty, column 19, lines 22-24, wherein this reads over “The final loss is a contrastive loss using cosine similarities between the tokens f.sub.0(q) and f.sub.0(t)”};

and filtering image representations that are below a threshold cosine similarity value {See Cui, [0367], wherein this reads over “In particular, when the decision similarity is less than the similarity reference threshold, the computing system 1000 may exclude the matching previously inspected image (PII) from the similarity image”}.

Chakraborty is directed to the invention of a contextualized visual search. While Chakraborty discloses “calculating a cosine similarity,” Chakraborty fails to expressly disclose “filtering image representations that are below a threshold cosine similarity value.” Cui is directed to the invention of a system for judgment record monitoring for an outlier judgment guide. Specifically, Cui discloses determining the similarity of images using thresholds, wherein images below said thresholds are excluded (i.e., filtering below a threshold cosine similarity). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the instant application to improve the prior art of Chakraborty with that of Cui such that the cosine similarity value of Chakraborty may be utilized in determining whether to exclude an image according to the invention of Cui. One of ordinary skill in the art would have been motivated to make the aforementioned combination such that irrelevant images may be excluded from a resulting set.
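The combination thus reads the claimed filtering step as Chakraborty's cosine similarity plus Cui's threshold-based exclusion. A minimal runnable sketch of that combined step, with hypothetical names and toy data drawn from neither reference:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def filter_below_threshold(query_vec, candidates, threshold=0.8):
    """Exclude image representations whose similarity falls below the threshold."""
    return [name for name, vec in candidates
            if cosine_similarity(query_vec, vec) >= threshold]

candidates = [("a.png", [1.0, 0.0]),   # same direction as query: similarity 1.0
              ("b.png", [0.0, 1.0]),   # orthogonal to query: similarity 0.0
              ("c.png", [0.9, 0.1])]   # nearly aligned: similarity ~0.99
print(filter_below_threshold([1.0, 0.0], candidates))  # ['a.png', 'c.png']
```

Whether this reading actually matches the claim as a whole (cosine similarity computed between query and database representations at search time, versus Chakraborty's use of cosine similarity inside a training loss) is a point applicant may want to probe in response.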
Allowable Subject Matter Claims 5, 6, 8, 9, 15, 16, 18, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL KIM whose telephone number is (571)272-2737. The examiner can normally be reached Monday-Friday, 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neveen Abel-Jalil can be reached on (571) 270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Paul Kim/ Primary Examiner Art Unit 2152 /PK/

Prosecution Timeline

May 21, 2025
Application Filed
Feb 20, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604129: TRUE WIRELESS DEVICE AND DUAL-MODE TRUE WIRELESS DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602445: QUERY COMPLETION BASED ON LOCATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591563: USING PERSISTENT MEMORY AND REMOTE DIRECT MEMORY ACCESS TO REDUCE WRITE LATENCY FOR DATABASE LOGGING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587806: SYSTEMS FOR USING AN AURICULAR DEVICE CONFIGURED WITH AN INDICATOR AND BEAMFORMER FILTER UNIT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585705: FEATURE SPACE MANAGEMENT (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 93% (+19.8%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 1089 resolved cases by this examiner. Grant probability derived from career allow rate.
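The projection figures are mutually consistent: 797 grants over 1089 resolved cases gives the 73% grant probability, and adding the +19.8-point interview lift gives the 93% with-interview figure. A quick arithmetic check (assuming, as the numbers imply, that the lift is additive in percentage points):

```python
granted, resolved = 797, 1089
career_allow = granted / resolved * 100   # career allow rate, in percent
with_interview = career_allow + 19.8      # interview lift applied as additive points

print(round(career_allow))    # 73
print(round(with_interview))  # 93
```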
