Prosecution Insights
Last updated: April 19, 2026
Application No. 19/205,102

TECHNIQUES FOR ENHANCED SEARCHES

Status: Non-Final OA (§103)
Filed: May 12, 2025
Examiner: MINCEY, JERMAINE A
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 5m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 56% of resolved cases (276 granted / 492 resolved; +1.1% vs TC avg)
Interview Lift: +41.9% (strong), comparing allow rates in resolved cases with an interview vs. without
Typical Timeline: 4y 5m average prosecution; 35 applications currently pending
Career History: 527 total applications across all art units
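
The card's headline numbers reconcile with each other, which is worth checking before relying on them. A minimal sketch of the arithmetic (Python; treating the interview lift as additive in percentage points is an assumption inferred from the dashboard's own 56% + 41.9% ≈ 98% figures):

```python
# Reproducing the Examiner Intelligence figures from the raw counts above.
granted, resolved = 276, 492

career_allow_rate = granted / resolved              # 0.561 -> shown as 56%
print(f"Career allow rate:  {career_allow_rate:.1%}")

# Assumption: the +41.9% interview lift is additive in percentage points,
# i.e. the difference in allow rate between resolved cases with and without
# an examiner interview.
interview_lift = 0.419
print(f"With interview:     {career_allow_rate + interview_lift:.0%}")  # ~98%

# The card also shows +1.1% vs the Tech Center average, implying a TC-average
# allow rate of roughly 55%.
print(f"Implied TC average: {career_allow_rate - 0.011:.1%}")
```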

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      23.8%    -16.2%
§103      53.0%    +13.0%
§102      13.8%    -26.2%
§112       3.4%    -36.6%
Comparison baseline is the Tech Center average estimate. Based on career data from 492 resolved cases.
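
One sanity check worth noting: subtracting each "vs TC Avg" delta from the examiner's rate gives the same baseline in every row, so the Tech Center average estimate behind these comparisons appears to be a flat value of about 40%. A quick verification (Python; interpreting the delta as rate minus baseline is an assumption):

```python
# Statute-specific rates and their reported offsets from the Tech Center average.
rows = {
    "§101": (23.8, -16.2),
    "§103": (53.0, +13.0),
    "§102": (13.8, -26.2),
    "§112": ( 3.4, -36.6),
}

for statute, (rate, delta) in rows.items():
    implied_tc_average = rate - delta
    print(f"{statute}: {rate:4.1f}%  (implied TC average = {implied_tc_average:.1f}%)")

# Every row implies the same ~40.0% baseline, consistent with a single flat
# Tech Center average estimate being used for all four statutes.
```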

Office Action

§103
DETAILED ACTION

1. This is a Non-Final Office Action Correspondence in response to U.S. Application No. 19/205102 filed on May 12, 2025.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 6 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claim(s) 1, 2, 8, 9, 15 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. U.S. Patent Application Publication No. 2024/0202230 (herein as ‘Li’) and further in view of Shibata et al. U.S. Patent Application Publication No. 2012/0254168 (herein as ‘Shibata’).

As to claim 1, Li teaches a method comprising: receiving, by an application of a user device, a query associated with a corpus of image files (Par. 0035 Li discloses conducting a first image search that is associated with candidate images); wherein each image file comprises metadata and an embedding, and wherein the embedding represents one or more visual characteristics of the image file (Par. 0031 Li discloses the query image includes a representation of objects that are not query objects. These objects that are not query objects are seen as the embedding. Par. 0032 Li discloses the system recognizes text strings on the query image); generating, by the application of the user device, a first feature vector for at least a portion of the query, the first feature vector representing one or more textual characteristics of the query (Par. 0033 Li discloses generating a query vector from the query image. Par. 0032 Li discloses the generated query vector is from one or more text strings obtained); providing, by the application of the user device, the first feature vector as input to a query understanding model that is trained to semantically parse the first feature vector to identify at least one of one or more entities, one or more locations, one or more actions, or a timeframe (Par. 0035 Li discloses providing the query vector to the first image search, which includes applying the model to the candidate images to generate first candidate vectors. Par. 0049 Li discloses the characteristics of the query object such as the type of object, type of place. The type of place is seen as a location, the type of object is seen as the entities).

Li teaches generating, by the application of the user device, a second feature vector for the revised query (Par. 0037 Li discloses conducting a second search based on the text strings. The first query is the query based upon the query image. The revised query is seen as the second query that contains text strings of the image); providing, by the application of the user device, the second feature vector as input to a semantic search model that is trained to compare the second feature vector and the embedding for each image file in the corpus of image files to identify one or more preliminary image files of the corpus of image files (Fig. 1 (135) and Par. 0037 Li discloses using the query vector from the first search to select candidate images produced by the second search of text strings. Par. 0038-0039 Li discloses using the computing model between the second candidate vector and the first search query vector. The computing model is seen as the system that compares the first search query vector to the second search of text strings); and receiving, by the application of the user device, the one or more preliminary image files as output from the semantic search model (Par. 0040 Li discloses the output as the images).

Li does not teach but Shibata teaches producing, by the application of the user device, a revised query from the query by removing the one or more locations and the timeframe from the query (Par. 0375 Shibata discloses removing the date and time information from the data that is used by the searching section. Par. 0414 Shibata discloses removing the location information from the data that is used by the searching section).

Li and Shibata are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shibata, to allow for accessing content such as image information while listening to sound information (Par. 0011-0014 Shibata).

As to claim 2, Li in combination with Shibata teaches each and every limitation of claim 1. In addition, Li teaches wherein providing the first feature vector as input to the query understanding model further comprises: classifying, by the application of the user device, the first feature vector as a plain language query or a semantic query (Par. 0033 Li discloses a first machine-learning model trained using a first set of training data and designed to identify a first type); and responsive to classifying the first feature vector as the semantic query, providing, by the application of the user device, the first feature vector as input to the query understanding model (Par. 0033 Li discloses placing the query feature vector into the machine learning model to identify a first type of object).

As to claim 8, Li teaches a computing device, comprising: one or more memories; and one or more processors in communication with the one or more memories and configured to execute instructions stored in the one or more memories to perform operations to: receive, by an application of the computing device, a query associated with a corpus of image files (Par. 0035 Li discloses conducting a first image search that is associated with candidate images); wherein each image file comprises metadata and an embedding, and wherein the embedding represents one or more visual characteristics of the image file (Par. 0031 Li discloses the query image includes a representation of objects that are not query objects. These objects that are not query objects are seen as the embedding. Par. 0032 Li discloses the system recognizes text strings on the query image); generate, by the application of the computing device, a first feature vector for at least a portion of the query, the first feature vector representing one or more textual characteristics of the query (Par. 0033 Li discloses generating a query vector from the query image. Par. 0032 Li discloses the generated query vector is from one or more text strings obtained); provide, by the application of the computing device, the first feature vector as input to a query understanding model that is trained to semantically parse the first feature vector to identify at least one of one or more entities, one or more locations, one or more actions, or a timeframe (Par. 0035 Li discloses providing the query vector to the first image search, which includes applying the model to the candidate images to generate first candidate vectors. Par. 0049 Li discloses the characteristics of the query object such as the type of object, type of place. The type of place is seen as a location, the type of object is seen as the entities).

Li teaches generate, by the application of the computing device, a second feature vector for the revised query (Par. 0037 Li discloses conducting a second search based on the text strings. The first query is the query based upon the query image. The revised query is seen as the second query that contains text strings of the image); provide, by the application of the computing device, the second feature vector as input to a semantic search model that is trained to compare the second feature vector and the embedding for each image file in the corpus of image files to identify one or more preliminary image files of the corpus of image files (Fig. 1 (135) and Par. 0037 Li discloses using the query vector from the first search to select candidate images produced by the second search of text strings. Par. 0038-0039 Li discloses using the computing model between the second candidate vector and the first search query vector. The computing model is seen as the system that compares the first search query vector to the second search of text strings); and receive, by the application of the computing device, the one or more preliminary image files as output from the semantic search model (Par. 0040 Li discloses the output as the images).

Li does not teach but Shibata teaches produce, by the application of the computing device, a revised query from the query by removing the one or more locations and the timeframe from the query (Par. 0375 Shibata discloses removing the date and time information from the data that is used by the searching section. Par. 0414 Shibata discloses removing the location information from the data that is used by the searching section).

Li and Shibata are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shibata, to allow for accessing content such as image information while listening to sound information (Par. 0011-0014 Shibata).

As to claim 9, Li in combination with Shibata teaches each and every limitation of claim 8. In addition, Li teaches wherein providing the first feature vector as input to the query understanding model further comprises operations to: classify, by the application of the computing device, the first feature vector as a plain language query or a semantic query (Par. 0033 Li discloses a first machine-learning model trained using a first set of training data and designed to identify a first type); and responsive to classifying the first feature vector as the semantic query, provide, by the application of the computing device, the first feature vector as input to the query understanding model (Par. 0033 Li discloses placing the query feature vector into the machine learning model to identify a first type of object).

As to claim 15, Li teaches a non-transitory computer-readable medium storing a plurality of instructions that, when executed by one or more processors of a computing device, cause the one or more processors to perform operations to: receive, by an application of the computing device, a query associated with a corpus of image files (Par. 0035 Li discloses conducting a first image search that is associated with candidate images); wherein each image file comprises metadata and an embedding, and wherein the embedding represents one or more visual characteristics of the image file (Par. 0031 Li discloses the query image includes a representation of objects that are not query objects. These objects that are not query objects are seen as the embedding. Par. 0032 Li discloses the system recognizes text strings on the query image); generate, by the application of the computing device, a first feature vector for at least a portion of the query, the first feature vector representing one or more textual characteristics of the query (Par. 0033 Li discloses generating a query vector from the query image. Par. 0032 Li discloses the generated query vector is from one or more text strings obtained); provide, by the application of the computing device, the first feature vector as input to a query understanding model that is trained to semantically parse the first feature vector to identify at least one of one or more entities, one or more locations, one or more actions, or a timeframe (Par. 0035 Li discloses providing the query vector to the first image search, which includes applying the model to the candidate images to generate first candidate vectors. Par. 0049 Li discloses the characteristics of the query object such as the type of object, type of place. The type of place is seen as a location, the type of object is seen as the entities).

Li teaches generate, by the application of the computing device, a second feature vector for the revised query (Par. 0037 Li discloses conducting a second search based on the text strings. The first query is the query based upon the query image. The revised query is seen as the second query that contains text strings of the image); provide, by the application of the computing device, the second feature vector as input to a semantic search model that is trained to compare the second feature vector and the embedding for each image file in the corpus of image files to identify one or more preliminary image files of the corpus of image files (Fig. 1 (135) and Par. 0037 Li discloses using the query vector from the first search to select candidate images produced by the second search of text strings. Par. 0038-0039 Li discloses using the computing model between the second candidate vector and the first search query vector. The computing model is seen as the system that compares the first search query vector to the second search of text strings); and receive, by the application of the computing device, the one or more preliminary image files as output from the semantic search model (Par. 0040 Li discloses the output as the images).

Li does not teach but Shibata teaches produce, by the application of the computing device, a revised query from the query by removing the one or more locations and the timeframe from the query (Par. 0375 Shibata discloses removing the date and time information from the data that is used by the searching section. Par. 0414 Shibata discloses removing the location information from the data that is used by the searching section).

Li and Shibata are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shibata, to allow for accessing content such as image information while listening to sound information (Par. 0011-0014 Shibata).

As to claim 16, Li in combination with Shibata teaches each and every limitation of claim 15. In addition, Li teaches wherein providing the first feature vector as input to the query understanding model further comprises operations to: classify, by the application of the computing device, the first feature vector as a plain language query or a semantic query (Par. 0033 Li discloses a first machine-learning model trained using a first set of training data and designed to identify a first type); and responsive to classifying the first feature vector as the semantic query, provide, by the application of the computing device, the first feature vector as input to the query understanding model (Par. 0033 Li discloses placing the query feature vector into the machine learning model to identify a first type of object).

6. Claim(s) 3-5, 10-12 and 17-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. U.S. Patent Application Publication No. 2024/0202230 (herein as ‘Li’) and further in view of Shibata et al. U.S. Patent Application Publication No. 2012/0254168 (herein as ‘Shibata’) and Shmiel et al. U.S. Patent Application Publication No. 2017/0039267 (herein as ‘Shmiel’).

As to claim 3, Li in combination with Shibata teaches each and every limitation of claim 1. Li in combination with Shibata does not teach but Shmiel teaches wherein producing the revised query comprises: comparing, by the application of the user device, each of the one or more entities to a list of unique identifiers to determine one or more matching entities, where a matching entity corresponds to a unique identifier in the list of unique identifiers (Par. 0064 Shmiel discloses matching the identifier with the query); and replacing, by the application of the user device, the one or more matching entities with one or more unique identifiers (Par. 0064 Shmiel discloses replacing the variable that is associated with a unique identifier. The variable is seen as the identifier). Li and Shmiel are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shmiel, to provide the most responsive search result (Par. 0011-0014 Shmiel). Shibata teaches and removing the one or more locations and the timeframe from the query to produce the revised query (Par. 0375 Shibata discloses removing the date and time information from the data that is used by the searching section. Par. 0414 Shibata discloses removing the location information from the data that is used by the searching section).

As to claim 4, Li in combination with Shibata teaches each and every limitation of claim 1. Li in combination with Shibata does not teach but Shmiel teaches comparing, by the application of the user device, the one or more locations and the timeframe from the query to the metadata for each of the one or more preliminary image files to identify one or more matching image files (Par. 0027 Shmiel discloses identifying a geographic location. Par. 0039 Shmiel discloses providing search results that include a time portion); and presenting, by the application of the user device, at least one of the one or more matching image files on a display of the user device (Par. 0035, Par. 0047 Shmiel discloses providing the images to the user). Li and Shmiel are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shmiel, to provide the most responsive search result (Par. 0011-0014 Shmiel).

As to claim 5, Li in combination with Shibata and Shmiel teaches each and every limitation of claim 4. Li in combination with Shibata does not teach but Shmiel teaches wherein the one or more matching image files comprise the one or more preliminary image files with metadata that matches at least a location of the one or more locations or the timeframe (Par. 0027 Shmiel discloses identifying a geographic location. Par. 0039 Shmiel discloses providing search results that include a time portion). Li and Shmiel are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shmiel, to provide the most responsive search result (Par. 0011-0014 Shmiel).

As to claim 10, Li in combination with Shibata teaches each and every limitation of claim 8. Li in combination with Shibata does not teach but Shmiel teaches wherein producing the revised query comprises operations to: compare, by the application of the computing device, each of the one or more entities to a list of unique identifiers to determine one or more matching entities, where a matching entity corresponds to a unique identifier in the list of unique identifiers (Par. 0064 Shmiel discloses matching the identifier with the query); and replace, by the application of the computing device, the one or more matching entities with one or more unique identifiers (Par. 0064 Shmiel discloses replacing the variable that is associated with a unique identifier. The variable is seen as the identifier). Li and Shmiel are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shmiel, to provide the most responsive search result (Par. 0011-0014 Shmiel). Shibata teaches and removing the one or more locations and the timeframe from the query to produce the revised query (Par. 0375 Shibata discloses removing the date and time information from the data that is used by the searching section. Par. 0414 Shibata discloses removing the location information from the data that is used by the searching section).

As to claim 11, Li in combination with Shibata teaches each and every limitation of claim 8. Li in combination with Shibata does not teach but Shmiel teaches wherein the operations further comprise operations to: compare, by the application of the computing device, the one or more locations and the timeframe from the query to the metadata for each of the one or more preliminary image files to identify one or more matching image files (Par. 0064 Shmiel discloses matching the identifier with the query); and present, by the application of the computing device, at least one of the one or more matching image files on a display of the computing device (Par. 0035, Par. 0047 Shmiel discloses providing the images to the user). Li and Shmiel are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shmiel, to provide the most responsive search result (Par. 0011-0014 Shmiel).

As to claim 12, Li in combination with Shibata teaches each and every limitation of claim 11. Li in combination with Shibata does not teach but Shmiel teaches wherein the one or more matching image files comprise the one or more preliminary image files with metadata that matches at least a location of the one or more locations or the timeframe (Par. 0027 Shmiel discloses identifying a geographic location. Par. 0039 Shmiel discloses providing search results that include a time portion). Li and Shmiel are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shmiel, to provide the most responsive search result (Par. 0011-0014 Shmiel).

As to claim 17, Li in combination with Shibata teaches each and every limitation of claim 15. Li in combination with Shibata does not teach but Shmiel teaches wherein producing the revised query comprises operations to: compare, by the application of the computing device, each of the one or more entities to a list of unique identifiers to determine one or more matching entities, where a matching entity corresponds to a unique identifier in the list of unique identifiers (Par. 0064 Shmiel discloses matching the identifier with the query); and replace, by the application of the computing device, the one or more matching entities with one or more unique identifiers (Par. 0064 Shmiel discloses replacing the variable that is associated with a unique identifier. The variable is seen as the identifier). Li and Shmiel are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shmiel, to provide the most responsive search result (Par. 0011-0014 Shmiel). Shibata teaches and removing the one or more locations and the timeframe from the query to produce the revised query (Par. 0375 Shibata discloses removing the date and time information from the data that is used by the searching section. Par. 0414 Shibata discloses removing the location information from the data that is used by the searching section).

As to claim 18, Li in combination with Shibata teaches each and every limitation of claim 15. Li in combination with Shibata does not teach but Shmiel teaches wherein the operations further comprise operations to: compare, by the application of the computing device, the one or more locations and the timeframe from the query to the metadata for each of the one or more preliminary image files to identify one or more matching image files (Par. 0064 Shmiel discloses matching the identifier with the query); and present, by the application of the computing device, at least one of the one or more matching image files on a display of the computing device (Par. 0035, Par. 0047 Shmiel discloses providing the images to the user). Li and Shmiel are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shmiel, to provide the most responsive search result (Par. 0011-0014 Shmiel).

As to claim 19, Li in combination with Shibata and Shmiel teaches each and every limitation of claim 18. Li in combination with Shibata does not teach but Shmiel teaches wherein the one or more matching image files comprise the one or more preliminary image files with metadata that matches at least a location of the one or more locations or the timeframe (Par. 0027 Shmiel discloses identifying a geographic location. Par. 0039 Shmiel discloses providing search results that include a time portion). Li and Shmiel are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the modified input of Shmiel, to provide the most responsive search result (Par. 0011-0014 Shmiel).

7. Claim(s) 7, 14 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. U.S. Patent Application Publication No. 2024/0202230 (herein as ‘Li’) and further in view of Shibata et al. U.S. Patent Application Publication No. 2012/0254168 (herein as ‘Shibata’) and Boic U.S. Patent Application Publication No. 2020/0242336 (herein as ‘Boic’).

As to claim 7, Li in combination with Shibata and Shmiel teaches each and every limitation of claim 5. Li in combination with Shibata does not teach but Boic teaches wherein comparing the timeframe to the metadata of a preliminary image file comprises: identifying a temporal distance comprising a difference between the timeframe and a metadata timeframe from the metadata of the preliminary image file, determining that the temporal distance exceeds a temporal threshold (Par. 0102 Boic discloses determining a match when the time threshold is exceeded; the comparison is between a current time and date and a previous time and date); and classifying the preliminary image file as a matching image file in response to the temporal distance exceeding the temporal threshold (Par. 0102 Boic discloses replacing the signature image with a current signature when a match is determined and the time threshold is exceeded). Li and Boic are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the image processing of Boic, to access images in real time more efficiently (Par. 0002 Boic).

As to claim 14, Li in combination with Shibata and Shmiel teaches each and every limitation of claim 12. Li in combination with Shibata does not teach but Boic teaches wherein comparing the timeframe to the metadata of a preliminary image file comprises operations to: identify a temporal distance comprising a difference between the timeframe and a metadata timeframe from the metadata, determine that the temporal distance exceeds a temporal threshold (Par. 0102 Boic discloses determining a match when the time threshold is exceeded; the comparison is between a current time and date and a previous time and date); and classify the preliminary image file as a matching image file in response to the temporal distance exceeding the temporal threshold (Par. 0102 Boic discloses replacing the signature image with a current signature when a match is determined and the time threshold is exceeded). Li and Boic are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the image processing of Boic, to access images in real time more efficiently (Par. 0002 Boic).

As to claim 20, Li in combination with Shibata and Shmiel teaches each and every limitation of claim 19. Li in combination with Shibata does not teach but Boic teaches wherein comparing the one or more locations to the metadata of a preliminary image file comprises operations to: identify one or more distances, each of the one or more distances comprising a distance between the one or more locations and a metadata location from the metadata; identify at least one distance of the one or more distances with a magnitude that is less than a distance threshold (Par. 0102 Boic discloses determining a match when the time threshold is exceeded; the comparison is between a current time and date and a previous time and date); and classify the preliminary image file as a matching image file in response to the magnitude being less than the distance threshold (Par. 0102 Boic discloses replacing the signature image with a current signature when a match is determined and the time threshold is exceeded). Li and Boic are analogous art because they are in the same field of endeavor, data processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the feature vector of Li to include the image processing of Boic, to access images in real time more efficiently (Par. 0002 Boic).

Allowable Subject Matter

8. Claims 6 and 13 do not contain any prior art rejections. Claims 6 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

6. The method of claim 5, wherein comparing the one or more locations to the metadata of a preliminary image file comprises: identifying one or more distances, each of the one or more distances comprising a distance between the one or more locations and a metadata location from the metadata; identifying at least one distance of the one or more distances with a magnitude that is less than a distance threshold; and classifying the preliminary image file as a matching image file in response to the magnitude being less than the distance threshold.

13. The computing device of claim 12, wherein comparing the one or more locations to the metadata of a preliminary image file comprises operations to: identify one or more distances, each of the one or more distances comprising a distance between the one or more locations and a metadata location from the metadata; identify at least one distance of the one or more distances with a magnitude that is less than a distance threshold; and classify the preliminary image file as a matching image file in response to the magnitude being less than the distance threshold.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JERMAINE A MINCEY whose telephone number is (571) 270-5010. The examiner can normally be reached 8am EST until 5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann J Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.A.M/
February 18, 2026
Examiner, Art Unit 2159

/ALBERT M PHILLIPS, III/
Primary Examiner, Art Unit 2159
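
Stripped of the legal framing, independent claims 1, 8, and 15 recite a concrete search pipeline: embed the text query, have a query understanding model pull out entities, locations, actions, and a timeframe, form a revised query with the locations and timeframe removed, embed that revised query, and rank the corpus by comparing the result against each image file's stored embedding. A minimal sketch of that flow (Python; the data classes, the toy `embed_text` and `query_understanding` stand-ins, and the gazetteer lists are illustrative placeholders, not anything taken from the application or the cited references):

```python
from dataclasses import dataclass, field

@dataclass
class ImageFile:
    path: str
    metadata: dict            # e.g. {"latlon": (48.85, 2.35), "date": "2024-07-04"}
    embedding: list[float]    # visual-characteristics embedding stored per image file

@dataclass
class ParsedQuery:
    entities: list = field(default_factory=list)
    locations: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    timeframe: str | None = None

def embed_text(text: str) -> list[float]:
    # Toy stand-in for the feature-vector model: a 26-dim letter-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

# Toy stand-in for the query understanding model. The claim feeds it the first
# feature vector; the raw text is also passed here so the gazetteer lookup works.
KNOWN_LOCATIONS = {"yosemite", "paris"}
KNOWN_TIMEFRAMES = {"last summer", "2023"}

def query_understanding(first_vector: list[float], query: str) -> ParsedQuery:
    lowered = query.lower()
    return ParsedQuery(
        locations=[loc for loc in KNOWN_LOCATIONS if loc in lowered],
        timeframe=next((t for t in KNOWN_TIMEFRAMES if t in lowered), None),
    )

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, corpus: list[ImageFile], top_k: int = 5) -> list[ImageFile]:
    first_vector = embed_text(query)                    # first feature vector (textual)
    parsed = query_understanding(first_vector, query)   # entities / locations / actions / timeframe

    # Revised query: remove the locations and the timeframe (the limitation the
    # Office Action maps to Shibata).
    revised = query.lower()
    for loc in parsed.locations:
        revised = revised.replace(loc, "")
    if parsed.timeframe:
        revised = revised.replace(parsed.timeframe, "")

    second_vector = embed_text(revised)                 # second feature vector

    # Semantic search: compare the second feature vector against each image
    # file's embedding and return the preliminary image files.
    ranked = sorted(corpus, key=lambda f: cosine(second_vector, f.embedding), reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    corpus = [ImageFile(f"img_{i}.jpg", {}, embed_text(desc)) for i, desc in
              enumerate(["dog on a beach", "birthday cake", "hiking a granite trail"])]
    for hit in search("photos of hiking in yosemite last summer", corpus):
        print(hit.path)
```

The step the examiner maps to Shibata is the location/timeframe removal in the middle of `search`; everything around it the Office Action attributes to Li.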
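Dependent claims 4-7 (and the device and CRM counterparts at 11-14 and 18-20) then filter those preliminary results against per-file metadata: claims 6, 13, and 20 treat a file as matching when a location distance falls below a distance threshold, while claims 7 and 14 are recited as classifying a file as matching when the temporal distance exceeds a temporal threshold. A small sketch of that post-filtering step as the claims recite it (Python; the helper names, the flat-earth distance approximation, and the threshold values are illustrative assumptions):

```python
import math
from datetime import date

def geo_distance_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    # Equirectangular approximation: adequate for a simple threshold check.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371.0 * math.hypot(x, y)

def matches_location(query_locations, meta_latlon, distance_threshold_km=50.0) -> bool:
    # Claims 6 / 13 / 20: a match when at least one distance is below the threshold.
    if meta_latlon is None:
        return False
    return any(geo_distance_km(q, meta_latlon) < distance_threshold_km
               for q in query_locations)

def matches_timeframe(query_date, meta_date, temporal_threshold_days=30) -> bool:
    # Claims 7 / 14 as recited: classified as matching when the temporal
    # distance *exceeds* the temporal threshold.
    if meta_date is None:
        return False
    return abs((query_date - meta_date).days) > temporal_threshold_days

def filter_preliminary(preliminary, query_locations, query_date):
    # Claims 4 / 11 / 18: compare the query's locations and timeframe to the
    # metadata of each preliminary image file; claim 5 requires a match on at
    # least a location or the timeframe. Dates here are datetime.date objects.
    return [img for img in preliminary
            if matches_location(query_locations, img.metadata.get("latlon"))
            or matches_timeframe(query_date, img.metadata.get("date"))]

if __name__ == "__main__":
    yosemite = (37.87, -119.54)
    print(matches_location([yosemite], (37.75, -119.60)))          # True: ~14 km away
    print(matches_timeframe(date(2024, 7, 4), date(2023, 8, 1)))   # True: > 30 days apart
```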

Prosecution Timeline

May 12, 2025: Application Filed
Feb 18, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591608: SYSTEM AND METHOD FOR PROVIDING PERSONALIZED EXPLAINABLE RESPONSE BY GENERATING MULTIMEDIA PROMPT USING CONTEXTUAL INFORMATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12566771: DYNAMICALLY SUPPRESSING QUERY ANSWERS IN SEARCH (granted Mar 03, 2026; 2y 5m to grant)
Patent 12554700: DISTRIBUTED STREAM-BASED ACID TRANSACTIONS (granted Feb 17, 2026; 2y 5m to grant)
Patent 12505101: SHORTEST AND CHEAPEST PATHS IN DISTRIBUTED ASYNCHRONOUS GRAPH TRAVERSALS (granted Dec 23, 2025; 2y 5m to grant)
Patent 12499169: COMPUTER-IMPLEMENTED SYSTEM AND METHOD FOR PROVIDING WEBSITE NAVIGATION RECOMMENDATIONS (granted Dec 16, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 56%
With Interview: 98% (+41.9%)
Median Time to Grant: 4y 5m
PTA Risk: Low
Based on 492 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month