Prosecution Insights
Last updated: April 19, 2026
Application No. 18/695,722

Method, Apparatus and Device for Photogrammetry, and Storage Medium

Non-Final OA §103
Filed
Mar 26, 2024
Examiner
BILODEAU, DUSTIN E
Art Unit
2664
Tech Center
2600 — Communications
Assignee
Tianyuan 3D (Tianjin) Technology Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 88% (71 granted / 81 resolved; +25.7% vs TC avg), above average
Interview Lift: +5.2% (moderate) across resolved cases with interview
Typical Timeline: 3y 3m average prosecution; 30 applications currently pending
Career History: 111 total applications across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 75.7% (+35.7% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 2.8% (-37.2% vs TC avg)
Deltas are relative to a Tech Center average estimate. Based on career data from 81 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application claims benefit of foreign priority under 35 U.S.C. 119(a)-(d) of CN202111133068.3, filed in China on 9/27/2021.

Preliminary Amendment

Applicant submitted a preliminary amendment on 3/26/2024. The Examiner acknowledges the amendment and has reviewed the claims accordingly.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 3/26/2024, 3/14/2025, and 7/14/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claim 18 is objected to because of the following informality: claim 18 should depend upon claim 17. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

an obtaining component, configured to, in claim 13;
a processing component, configured to, in claim 13; and
a constructing component, configured to, in claim 13.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4, 13, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Armstrong (U.S. Patent Pub. No. 2020/0049489) in view of Chui (U.S. Patent No. 10,791,319).

Regarding Claim 1, Armstrong teaches a method for photogrammetry, comprising (¶68: FIG. 48 is a perspective view of a master part having a collection of fiducial targets being imaged by a photogrammetry camera held in a plurality of positions):

obtaining a plurality of groups of synchronous images of an object to be measured continuously photographed by a multi-view camera, wherein each group of synchronous images comprises a plurality of images photographed by a plurality of cameras in the multi-view camera at a same moment (¶79: FIG. 1 is a perspective view of a 3D imager 10 according to an embodiment. It includes a frame 20, a projector 30, a first camera assembly 60, and a second camera assembly 70; ¶201: the mover 3410 may move the plurality of 3D imagers 3424 to a new position and orientation in space. A method is used to properly register the data received from the 3D measurements made at each of the multiple positions and orientations of the mover 3410), and a plurality of mark points are arranged on a surface of the object to be measured (Fig. 49; ¶69: FIG. 49 is a perspective view of a right edge of the master part showing a hinge assembly and a collection of fiducial targets);

extracting coordinates of image points corresponding to the mark points in the synchronous images for each group of synchronous images (Figs. 12A and 12B; ¶99: The object point might be, for example, one of the points V.sub.A, V.sub.B, V.sub.C, or V.sub.D. These four object points correspond to the points W.sub.A, W.sub.B, W.sub.C, W.sub.D, respectively, on the reference plane 1210 of device 2), and reconstructing first three-dimensional coordinates of the mark points corresponding to the image points according to calibration data of the multi-view camera and the coordinates of the image points to obtain a plurality of groups of three-dimensional mark points (¶182: each processor 3430 of a 3D imager 3424 is configured to determine a point cloud of 3D coordinates of the object surface and to send this information to a system controller 3431; ¶187: This calibration procedure enables determination of mutual registration information for each of the 3D imagers 3424 on the mounting frame 3422); and

obtaining a mark point global framework corresponding to the mark points on the surface of the object to be measured based on the plurality of groups of three-dimensional mark points (¶241: the 3D coordinates of object points measured by the collection of imagers are transformed into a common frame of reference, which may also be referred to as a global frame of reference).

Armstrong implies but does not explicitly disclose extracting coordinates of image points corresponding to the mark points in the synchronous images for each group of synchronous images, and reconstructing first three-dimensional coordinates of the mark points corresponding to the image points according to calibration data of the multi-view camera and the coordinates of the image points to obtain a plurality of groups of three-dimensional mark points.

Chui is in the same field of art of image analysis. Further, Chui teaches extracting coordinates of image points corresponding to the mark points in the synchronous images for each group of synchronous images (Col. 3, Lines 42-20: At step 206, a common set of points in the scene that are likely to have been captured by most, if not all, of the plurality of cameras is identified. For example, video frames associated with a prescribed time slice captured by the plurality of cameras may be manually and/or automatically searched at step 206 to identify commonly captured areas of the scene and the common set of points. In some embodiments, the common set of points comprises distinct features and/or fiducials), and reconstructing first three-dimensional coordinates of the mark points corresponding to the image points according to calibration data of the multi-view camera and the coordinates of the image points to obtain a plurality of groups of three-dimensional mark points (Col. 4, Lines 9-20: At step 212, images or frames captured by the cameras as well as the determined relative poses of the cameras are processed to derive a three-dimensional reconstruction of the scene in the common field of view of the cameras. For example, step 212 may comprise calculating correspondence between images comprising a frame (i.e., images associated with a particular time slice) that have been captured by the plurality of cameras. Once correspondence is calculated, the images corresponding to a frame or time slice can be rectified, and depth information for tracked features can be computed. A sparse point cloud can then be used to guide calculation of a dense surface in three-dimensional space).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Armstrong by extracting image points and reconstructing 3D coordinates as taught by Chui; one of ordinary skill in the art would be motivated to combine the references to eliminate the need for camera or relative pose information (Chui, Background). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding Claim 2, Armstrong in view of Chui discloses the method as claimed in claim 1, wherein the obtaining the mark point global framework corresponding to the mark points on the surface of the object to be measured based on the plurality of groups of three-dimensional mark points comprises: performing tracking and stitching processing and inter-group deduplication processing on the plurality of groups of three-dimensional mark points to obtain the mark point global framework corresponding to the mark points on the surface of the object to be measured (Chui, Col. 4, Lines 16-20: Once correspondence is calculated, the images corresponding to a frame or time slice can be rectified, and depth information for tracked features can be computed. A sparse point cloud can then be used to guide calculation of a dense surface in three-dimensional space).

Regarding Claim 4, Armstrong in view of Chui discloses the method as claimed in claim 1, wherein the extracting coordinates of image points corresponding to the mark points in the synchronous images comprises: performing edge extraction processing on the synchronous images to obtain the image points in the synchronous images; and determining the coordinates of the image points in a coordinate system of the synchronous images based on positions of the image points in the synchronous images (Armstrong, ¶171: a solution to this issue uses the sharp edges that appear in one or more 2D images of the feature being measured. In many cases, edge features can be clearly seen in 2D images, for example, based on textural shadings. These sharp edges may be determined in coordination with surface coordinates determined using the triangulation methods. By intersecting the projected rays that pass through the perspective center of the lens in the triangulation scanner with the 3D coordinates of the portion of the surface determined to relatively high accuracy by triangulation methods, the 3D coordinates of the edge features may be accurately determined).

Regarding Claim 15, Armstrong in view of Chui discloses the non-transitory computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program, when executed by a processor, implements the method as claimed in claim 1 (Chui, claim 16: A computer program product embodied in a non-transitory computer readable storage medium and comprising computer instructions).

Regarding Claim 16, Armstrong in view of Chui discloses the method as claimed in claim 1, wherein reconstructing first three-dimensional coordinates of the mark points corresponding to the image points according to calibration data of the multi-view camera and the coordinates of the image points comprises: reconstructing the first three-dimensional coordinates based on a polar line matching method by the coordinates of the image points in the synchronous images and the pre-obtained calibration data of the multi-view camera (Armstrong, ¶104: To check the consistency of the projection point P.sub.3, intersect the plane P.sub.2-E.sub.23-E.sub.32 with the reference plane 1280 to obtain the epipolar line 1284. Intersect the plane P.sub.1-E.sub.13-E.sub.31 to obtain the epipolar line 1282. If the projection point P.sub.3 has been determined consistently, the projection point P.sub.3 will lie on the intersection of the determined epipolar lines 1282 and 1284).
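The claim 16 mapping above turns on epipolar/triangulation geometry: recovering a mark point's 3D coordinates from its pixel coordinates in two calibrated views. As a rough illustration of that general technique (not the applicant's or Armstrong's actual implementation; the camera matrices and point below are invented), a minimal linear-triangulation sketch:

```python
# Illustrative sketch only: linear (DLT) triangulation of a mark point from
# two calibrated views. All numbers are hypothetical, not from the application.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel projections x1, x2 in two cameras
    with 3x4 projection matrices P1, P2, via a linear least-squares system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Hypothetical calibration data: shared intrinsics K, second camera offset 1 unit.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.5, 0.2, 4.0, 1.0])   # ground-truth mark point (homogeneous)
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]     # project into view 1
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]     # project into view 2
X_rec = triangulate(P1, P2, x1, x2)       # recovers the point (0.5, 0.2, 4.0)
```

In a real multi-view rig the same linear system simply gains two rows per additional camera, which is one reason calibration data for every camera in the rig matters to the claimed reconstruction step.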
Regarding Claim 13, claim 13 has been analyzed with regard to claim 1 and is rejected for the same reasons of obviousness as used above, as well as in accordance with Armstrong's further teaching of: an apparatus for photogrammetry, comprising: an obtaining component, a processing component, and a constructing component (¶91: FIG. 8 is a block diagram of a computing system that includes the internal electrical system 700, one or more computing elements 810, 820, and a network of computing elements 830, commonly referred to as the cloud... multiple external processors, especially processors on the cloud, may be used to process scanned data in parallel, thereby providing faster results, especially where relatively time-consuming registration and filtering may be required).

Allowable Subject Matter

Claims 3, 5-12, and 17-21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claim 3, no prior art teaches wherein the performing tracking and stitching processing and inter-group deduplication processing on the plurality of groups of three-dimensional mark points to obtain the mark point global framework corresponding to the mark points on the surface of the object to be measured comprises: performing tracking and stitching processing on the plurality of groups of three-dimensional mark points to obtain a mark point original framework corresponding to the mark points on the surface of the object to be measured and numbers of the three-dimensional mark points in each group in the original framework; and performing inter-group deduplication processing on the numbers of the three-dimensional mark points in each group in the original framework to obtain the global framework and unique numbers of the three-dimensional mark points in each group in the global framework.
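The claim 3 subject matter found allowable above is described only functionally: mark points reconstructed in different groups must be collapsed into one global framework with unique numbers. As a toy sketch of that deduplication idea in the abstract (the tolerance, data, and merge rule are invented for illustration and are not the claimed implementation):

```python
# Hypothetical sketch of inter-group deduplication of 3D mark points:
# points from different groups that coincide within a tolerance are treated
# as the same physical mark point and share one unique global number.
import numpy as np

def deduplicate(groups, tol=1e-3):
    """Merge per-group 3D mark points into a global framework.
    Returns the unique points and, per group, each point's global number."""
    global_pts = []   # one representative coordinate per unique mark point
    numbering = []    # per group: list of global numbers, in input order
    for pts in groups:
        ids = []
        for p in pts:
            for i, q in enumerate(global_pts):
                if np.linalg.norm(p - q) < tol:   # same physical mark point
                    ids.append(i)
                    break
            else:
                global_pts.append(p)              # first sighting: new number
                ids.append(len(global_pts) - 1)
        numbering.append(ids)
    return np.array(global_pts), numbering

g1 = np.array([[0.0, 0, 0], [1.0, 0, 0]])
g2 = np.array([[1.0, 0, 0], [0.0, 1, 0]])   # first point duplicates g1's second
framework, ids = deduplicate([g1, g2])      # 3 unique points; groups share one
```

The linear scan here is O(n²); a production system would more plausibly use a spatial index, but the input/output contract (global framework plus unique per-group numbering) matches what the claim language describes.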
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUSTIN BILODEAU, whose telephone number is (571) 272-1032. The examiner can normally be reached 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DUSTIN BILODEAU/
Examiner, Art Unit 2664

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664

Prosecution Timeline

Mar 26, 2024
Application Filed
Feb 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602802: ELECTRONIC DEVICE FOR GENERATING DEPTH MAP AND OPERATING METHOD THEREOF. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12597293: System and Method for Authoring Human-Involved Context-Aware Applications. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12592084: APPARATUS, METHOD, AND COMPUTER PROGRAM FOR IDENTIFYING STATE OF LIGHTING. Granted Mar 31, 2026 (2y 5m to grant).
Patent 12591959: METHOD, APPARATUS, AND DEVICE FOR PROCESSING IMAGE, AND STORAGE MEDIUM. Granted Mar 31, 2026 (2y 5m to grant).
Patent 12581041: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION COLLECTION SYSTEM. Granted Mar 17, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88% (93% with interview, +5.2%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 81 resolved cases by this examiner. Grant probability derived from career allow rate.
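The headline figures follow arithmetically from the career data quoted elsewhere on this page (71 granted of 81 resolved; a +5.2-point interview lift). A quick sketch, assuming the dashboard simply adds the lift to the base allow rate, which is an inference about its methodology rather than a documented formula:

```python
# Sketch of how the dashboard's figures could derive from the quoted career
# data. The additive-lift model is an assumption, not the vendor's formula.
granted, resolved = 71, 81
allow_rate = granted / resolved           # 0.8765... shown as 88%
with_interview = allow_rate + 0.052       # +5.2-point lift, shown as 93%
print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview")
```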
