Prosecution Insights
Last updated: April 19, 2026
Application No. 18/777,315

DOCUMENT SEARCH FOR DOCUMENT RETRIEVAL USING 3D MODEL

Non-Final OA — §101, §DP
Filed: Jul 18, 2024
Examiner: TSENG, CHARLES
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Georgetown University
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Grants 79% — above average
Career Allow Rate: 79% (541 granted / 686 resolved; +16.9% vs TC avg)
Strong interview lift: +32.1% (resolved cases with interview vs. without)
Typical timeline: 2y 6m avg prosecution; 20 currently pending
Career history: 706 total applications across all art units
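The headline allow rate above follows directly from the raw counts (541 granted of 686 resolved). A minimal sketch of that arithmetic:

```python
# Recompute the examiner's career allow rate from the counts reported above.
granted = 541
resolved = 686
allow_rate = granted / resolved  # 541 / 686 ≈ 0.7886
print(f"Career allow rate: {allow_rate:.1%}")  # rounds to the 79% headline figure
```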

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 686 resolved cases

Office Action

§101 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 3, 6, 8, 10, 11, 13, 16, 18 and 20 are objected to because of the following informalities:

For claim 1, Examiner believes this claim should be amended in the following manner: A method, performed by one or more computing devices, the method comprising: generating a three-dimensional (3D) model for an object based on one or more epipolar views of the object; generating one or more feature vector representations of the 3D model; identifying one or more documents having one or more feature vectors that most closely match the one or more feature vector representations generated for the 3D model; wherein the generating the 3D model comprises: selecting a point on a first view of the one or more epipolar views; projecting a vector from the selected point into a model space; projecting the vector onto at least one view of the one or more epipolar views which [[are]] is not the first view; and determining one or more candidate points on the vector for the 3D model.

For claim 3, Examiner believes this claim should be amended in the following manner: The method of claim 2, wherein the point cloud representation comprises a plurality of points generated by iteratively sampling the one or more epipolar views.

For claim 6, Examiner believes this claim should be amended in the following manner: The method of claim 1, wherein the selected point is based on a determined bias.

For claim 8, Examiner believes this claim should be amended in the following manner: The method of claim 1, wherein the identifying is based on matching the one or more feature vector representations generated for the 3D model to a feature vector database comprising a plurality of feature vectors.
For claim 10, Examiner believes this claim should be amended in the following manner: The method of claim 8, wherein the matching comprises comparing the one or more feature vector representations generated for the 3D model to the plurality of feature vectors of the feature vector database using at least one of cosine similarity or Euclidean distance.

For claim 11, Examiner believes this claim should be amended in the following manner: A computer-readable medium storing instructions for executing a method via one or more processors, the method comprising: generating a three-dimensional (3D) model for an object based on one or more epipolar views of the object; generating one or more feature vector representations of the 3D model; identifying one or more documents having one or more feature vectors that most closely match the one or more feature vector representations generated for the 3D model; wherein the generating the 3D model comprises: selecting a point on a first view of the one or more epipolar views; projecting a vector from the selected point into a model space; projecting the vector onto at least one view of the one or more epipolar views which [[are]] is not the first view; and determining one or more candidate points on the vector for the 3D model.

For claim 13, Examiner believes this claim should be amended in the following manner: The computer-readable medium of claim 12, wherein the point cloud representation comprises a plurality of points that are iteratively sampled from the one or more epipolar views.

For claim 16, Examiner believes this claim should be amended in the following manner: The computer-readable medium of claim 11, wherein the selected point is based on a determined bias.
For claim 18, Examiner believes this claim should be amended in the following manner: The computer-readable medium of claim 11, wherein the identifying is based on matching the one or more feature vector representations generated for the 3D model to a feature vector database comprising a plurality of feature vectors.

For claim 20, Examiner believes this claim should be amended in the following manner: The computer-readable medium of claim 18, wherein the matching comprises comparing the one or more feature vector representations generated for the 3D model to the plurality of feature vectors of the feature vector database using at least one of cosine similarity or Euclidean distance.

Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 11-20 are rejected under 35 U.S.C. 101 because they encompass nonstatutory subject matter. For claims 11-20, these claims are directed to a “computer-readable medium”. Applicants’ Specification does not specifically define a “computer-readable medium” and does not describe what forms a “computer-readable medium” may take. As Applicants’ Specification describes “communication media” as media for conveying computer-execution instructions in signals (Specification at par. 70), Examiner finds “computer-readable medium” may be broadly interpreted to cover communication media such as signals and other ineligible subject matter. Therefore, claims 11-20 are rejected under 35 U.S.C. 101 for encompassing nonstatutory subject matter.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms.
The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1, 2, 4-7, 11, 12 and 14-17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5 and 7-11 of U.S. Patent No. 12,073,646. The following is a claim comparison of claims 1, 2, 4-7, 11, 12 and 14-17 of the instant application and claims 1-5 and 7-11 of U.S. Patent No. 12,073,646.

Application No. 18/777,315, claim 1:

1. A method, performed by one or more computing devices, the method comprising: generating a three-dimensional (3D) model for an object based on one or more epipolar views of the object; generating one or more feature vector representations of the 3D model; identifying one or more documents having one or more feature vectors that most closely match the one or more feature vectors generated for the 3D model; wherein the generating the 3D model comprises: selecting a point on a first view of the one or more epipolar views; projecting a vector from the selected point into a model space; projecting the vector onto the other one or more epipolar views which are not the first view; and determining one or more candidate points on the vector for the 3D model.

U.S. Patent No. 12,073,646, claim 1:

1. A method, performed by one or more computing devices, the method comprising: obtaining one or more images of an object; generating a three-dimensional (3D) model for the object based on the one or more images of the object; generating one or more feature vector representations of the object; matching the one or more feature vector representations to one or more other feature vectors; and based on the matching, identifying one or more documents having one or more feature vectors that most closely match the one or more feature vector representations generated for the object, wherein the 3D model is represented by a point cloud, wherein the one or more images of the object include one or more epipolar views, and wherein the generating the 3D model represented by the point cloud for the object comprises: selecting a point on a first view of the one or more epipolar views; projecting a vector from the selected point into a model space; projecting the vector onto at least one view of the one or more epipolar views which is not the first view; and determining candidate points on the vector for the point cloud.

Dependent claim correspondence: application claim 2 corresponds to patent claim 1; 4 to 2; 5 to 3; 6 to 4; and 7 to 5.

Application No. 18/777,315, claim 11:

11. A computer-readable medium storing instructions for executing a method via one or more processors, the method comprising: generating a three-dimensional (3D) model for an object based on one or more epipolar views of the object; generating one or more feature vector representations of the 3D model; identifying one or more documents having one or more feature vectors that most closely match the one or more feature vectors generated for the 3D model; wherein the generating the 3D model comprises: selecting a point on a first view of the one or more epipolar views; projecting a vector from the selected point into a model space; projecting the vector onto the other one or more epipolar views which are not the first view; and determining one or more candidate points on the vector for the 3D model.

U.S. Patent No. 12,073,646, claim 7:

7. A non-transitory computer-readable medium storing instructions for executing a method, the method comprising: obtaining one or more images of an object; generating a three-dimensional (3D) model for the object based on the one or more images of the object; generating one or more feature vector representations of the object; matching the one or more feature vector representations to one or more other feature vectors; and based on the matching, identifying one or more documents having one or more feature vectors that most closely match the one or more feature vector representations generated for the object, wherein the 3D model is represented by a point cloud, wherein the one or more images of the object include one or more epipolar views, and wherein the generating the 3D model represented by the point cloud for the object comprises: selecting a point on a first view of the one or more epipolar views; projecting a vector from the selected point into a model space; projecting the vector onto at least one view of the one or more epipolar views which is not the first view; and determining candidate points on the vector for the point cloud.

Dependent claim correspondence: application claim 12 corresponds to patent claim 7; 14 to 8; 15 to 9; 16 to 10; and 17 to 11.

Claims 1, 2, 4-7, 11, 12 and 14-17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-5 and 7-11 of U.S. Patent No. 12,073,646. For independent claim 1, claim 1 of U.S. Patent No. 12,073,646 anticipates and discloses the limitations of claim 1 of the instant application as shown in the claim chart above. Thus, claim 1 of the instant application is not patentably distinct from claim 1 of U.S. Patent No. 12,073,646. For dependent claims 2 and 4-7, claims 1-5 of U.S. Patent No. 12,073,646 mirror and recite the limitations of claims 2 and 4-7 as set forth in the claim chart above. Thus, claims 2 and 4-7 of the instant application are not patentably distinct from claims 1-5 of U.S. Patent No. 12,073,646. For independent claim 11, claim 7 of U.S. Patent No.
12,073,646 anticipates and discloses the limitations of claim 11 of the instant application as shown in the claim chart above. Thus, claim 11 of the instant application is not patentably distinct from claim 7 of U.S. Patent No. 12,073,646. For dependent claims 12 and 14-17, claims 7-11 of U.S. Patent No. 12,073,646 mirror and recite the limitations of claims 12 and 14-17 as set forth in the claim chart above. Thus, claims 12 and 14-17 of the instant application are not patentably distinct from claims 7-11 of U.S. Patent No. 12,073,646.

Allowable Subject Matter

Claims 1-10 would be allowable if rewritten to address the claim objections discussed above and upon submission of a suitable terminal disclaimer.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES TSENG whose telephone number is (571)270-3857. The examiner can normally be reached 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES TSENG/
Primary Examiner, Art Unit 2613
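Claims 10 and 20 above recite comparing feature vectors "using at least one of cosine similarity or Euclidean distance." Purely as an illustration of those two metrics (this is not the applicant's actual implementation; the vectors and document IDs below are made up), a minimal sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (higher = closer)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def euclidean_distance(a, b):
    """Straight-line distance between two feature vectors (lower = closer)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_document(query_vec, database):
    """Return the database entry whose stored feature vector best matches
    the query vector, ranked here by cosine similarity."""
    return max(database, key=lambda doc: cosine_similarity(query_vec, doc["vector"]))

# Hypothetical feature vector for a 3D model and a toy document database.
query = [0.9, 0.1, 0.4]
docs = [
    {"id": "doc-A", "vector": [0.1, 0.9, 0.2]},
    {"id": "doc-B", "vector": [0.8, 0.2, 0.5]},
]
print(closest_document(query, docs)["id"])  # doc-B points in nearly the same direction
```

Cosine similarity compares direction only and ignores vector magnitude, while Euclidean distance also penalizes magnitude differences; the claim language permits either metric.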

Prosecution Timeline

Jul 18, 2024
Application Filed
Jan 26, 2026
Non-Final Rejection — §101, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594021: EDITING METHOD OF DYNAMIC SPECTRUM PROGRAM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591405: SHARED CONTROL OF A VIRTUAL OBJECT BY MULTIPLE DEVICES
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579760: DIGITAL CONTENT PLATFORM INCLUDING METHODS AND SYSTEM FOR RECORDING AND STORING DIGITAL CONTENT
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572015: TRANSPARENT OPTICAL MODULE USING PIXEL PATCHES AND ASSOCIATED LENSLETS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566503: REPRESENTATION FORMAT FOR HAPTIC OBJECT
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview (+32.1%): 99%
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 686 resolved cases by this examiner. Grant probability derived from career allow rate.
