Prosecution Insights
Last updated: April 19, 2026
Application No. 18/528,475

METHOD AND SYSTEM FOR ESTIMATING A 3D CAMERA POSE BASED ON 2D MASK AND RIDGES AND APPLICATION IN A LAPAROSCOPIC PROCEDURE

Non-Final Office Action: §101, §102, §103

Filed: Dec 04, 2023
Examiner: Burke, Tionna M.
Art Unit: 2178
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Edda Technology Inc.
OA Round: 1 (Non-Final)

Grant Probability: 54% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 4y 9m
Grant Probability With Interview: 73%

Examiner Intelligence

Career Allow Rate: 54% (233 granted / 431 resolved; -0.9% vs TC avg)
Interview Lift: +19.3% allow rate for resolved cases with interview
Avg Prosecution: 4y 9m; 46 applications currently pending
Career History: 477 total applications across all art units
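The panel's headline numbers are simple arithmetic over the raw counts. A quick sketch reproducing them (treating the +19.3% interview lift as additive percentage points is an assumption, but one consistent with the 54% and 73% figures shown):

```python
# Reproduce the examiner panel's headline statistics from the raw counts.
granted = 233
resolved = 431

allow_rate = granted / resolved                   # career allow rate
print(f"Career allow rate: {allow_rate:.0%}")     # prints: Career allow rate: 54%

# Interview lift: reported allow-rate difference for resolved cases
# with an interview, assumed additive in percentage points.
interview_lift = 0.193
with_interview = allow_rate + interview_lift
print(f"With interview: {with_interview:.0%}")    # prints: With interview: 73%
```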

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 431 resolved cases.

Office Action

Rejections: §101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 5/5/25 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 9-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because the specification states that the recited "machine-readable medium" includes signals (see Specification paragraph [0071]); the machine-readable-medium language fails to disclaim transitory media. The broadest reasonable interpretation of a claim drawn to a storage medium typically covers both non-transitory tangible media and transitory propagating signals per se, in view of the ordinary and customary meaning of "computer readable storage medium." See Ex parte Mewherter, App. No. 2012-007692 (PTAB 2013) (precedential). Claims 9-16 therefore fail to recite statutory subject matter.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: "A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention."
Claims 1, 2, 5, 6, 7, 9, 10, 13-15, 17, 18, and 21-23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kluckner et al., United States Patent Publication 2018/0174311 (hereinafter "Kluckner").

Claim 1: Kluckner discloses a method comprising: generating virtual 3D camera poses with respect to a 3D model previously constructed to model a 3D target organ and 3D anatomical structures associated therewith, wherein each of the virtual 3D camera poses corresponds to a perspective to view the 3D model (see paragraphs [0015] and [0054]); Kluckner teaches generating poses from a 3D model of an organ, the poses corresponding to views of the 3D model. Creating virtual 2D images corresponding to the virtual 3D camera poses by projecting the 3D model in accordance with corresponding perspectives, wherein each of the virtual 2D images includes a 2D projected target organ and/or 2D structures of some of the 3D anatomical structures visible from a corresponding perspective (see paragraph [0023]); Kluckner teaches creating 2D images corresponding to the 3D poses by projecting the 3D model to obtain 2D images and structures. And obtaining 2D feature/camera pose mapping models based on the 2D features extracted from the virtual 2D images and the corresponding virtual 3D camera poses, wherein the 2D features include a 2D ridge line projected from a 3D ridge on the target organ represented in the 3D model (see paragraphs [0027] and [0028]); Kluckner teaches obtaining the 3D-to-2D mappings and projections of features of the 3D model onto the 2D images.

Claim 2: Kluckner discloses wherein the 3D model models at least one of: the target organ; at least one blood vessel; at least one tumor; and one or more 3D ridges on the target organ (see paragraph [0015]). Kluckner teaches that the 3D model models a target organ.
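For orientation, the method of claim 1 amounts to sampling virtual camera poses around a 3D model and rendering each pose to a 2D projection. A minimal, self-contained sketch of that general idea, with illustrative pinhole intrinsics and sampling grid (none of these values or names are taken from Kluckner or the application):

```python
import numpy as np

def look_at_pose(cam_pos, target=np.zeros(3)):
    """Build a world-to-camera rotation for a camera at cam_pos looking at target."""
    fwd = target - cam_pos
    fwd = fwd / np.linalg.norm(fwd)
    up = np.array([0.0, 0.0, 1.0])
    right = np.cross(fwd, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    R = np.stack([right, true_up, fwd])       # rows: camera axes in world frame
    return R, cam_pos

def project(points_3d, R, cam_pos, focal=500.0, center=(320.0, 240.0)):
    """Pinhole projection of Nx3 world points into 2D pixel coordinates."""
    cam = (points_3d - cam_pos) @ R.T         # world frame -> camera frame
    z = cam[:, 2:3]
    return focal * cam[:, :2] / z + np.array(center)

# Sample virtual poses on a sphere around the model (a viewing-angle grid).
poses = []
for az in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    for el in np.linspace(0.2, 1.2, 3):
        cam_pos = 5.0 * np.array([np.cos(el) * np.cos(az),
                                  np.cos(el) * np.sin(az),
                                  np.sin(el)])
        poses.append(look_at_pose(cam_pos))

# Each pose yields a virtual 2D image; here, just the projected model vertices.
model_vertices = np.random.rand(100, 3) - 0.5   # stand-in for an organ mesh
virtual_images = [project(model_vertices, R, t) for R, t in poses]
```

Real systems would render full 2D images and extract mask/ridge features from them; this sketch only shows the pose-sampling and projection geometry.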
Claim 5: Kluckner discloses wherein the step of obtaining 2D feature/camera pose mapping models comprises: pairing each of the virtual 3D camera poses with 2D features extracted from a corresponding virtual 2D image created by projecting the 3D model in accordance with a perspective determined based on the virtual 3D camera pose; and creating the 2D feature/camera pose mapping models based on the pairs of the 2D features and the virtual 3D camera poses (see paragraphs [0028]-[0030] and [0033]). Kluckner teaches pairing the 3D image data with 2D features and creating a look-up table and mappings based on the pairing.

Claim 6: Kluckner discloses wherein the 2D feature/camera pose mapping models correspond to a look-up table comprising the pairs of the 2D features and the virtual 3D camera poses, so that given input 2D features extracted from a 2D image, at least one 3D camera pose is identified from a pair in the look-up table that has stored 2D features similar to the input 2D features (see paragraph [0033]). Kluckner teaches a look-up table for looking up the pairs of 2D features and 3D camera poses.

Claim 7: Kluckner discloses wherein the step of creating the 2D feature/camera pose mapping tools comprises: generating training data based on the pairs of the 2D features and the virtual 3D camera poses (see paragraph [0011]); Kluckner teaches generating training data based on the mapping of 3D and 2D features. And performing machine learning, using the training data, to learn the 2D feature/camera pose mapping tools (see paragraph [0013]); Kluckner teaches performing machine learning using training data to map 3D to 2D.

Claims 9, 10, 13-15: Although claims 9, 10, and 13-15 are machine-readable medium claims, they are interpreted and rejected for the same reasons as method claims 1, 2, and 5-7, respectively.
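The look-up-table mapping discussed for claims 5-7 is essentially nearest-neighbour retrieval: store (feature vector, pose) pairs offline, then return the pose whose stored features best match the query. A generic sketch of that pattern (the feature vectors, Euclidean distance metric, and pose labels are illustrative assumptions, not taken from Kluckner):

```python
import numpy as np

class FeaturePoseTable:
    """Look-up table pairing 2D feature vectors with the virtual 3D camera
    poses that produced them. Querying returns the pose whose stored
    features are closest (Euclidean distance) to the input features."""

    def __init__(self):
        self.features = []   # list of 1-D feature vectors
        self.poses = []      # corresponding camera poses (any representation)

    def add(self, feature_vec, pose):
        self.features.append(np.asarray(feature_vec, dtype=float))
        self.poses.append(pose)

    def query(self, feature_vec):
        q = np.asarray(feature_vec, dtype=float)
        dists = [np.linalg.norm(f - q) for f in self.features]
        return self.poses[int(np.argmin(dists))]

# Offline: pair each virtual pose with features of its rendered 2D image.
table = FeaturePoseTable()
table.add([0.9, 0.1, 0.3], pose="anterior view")
table.add([0.2, 0.8, 0.5], pose="lateral view")

# Online: features extracted from a real 2D image retrieve the closest pose.
print(table.query([0.85, 0.15, 0.25]))   # prints: anterior view
```

The claim-7 variant replaces the exhaustive table with a learned regression model trained on the same pairs; the retrieval interface stays the same.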
Claims 17, 18, 21-23: Although claims 17, 18, and 21-23 are system claims, they are interpreted and rejected for the same reasons as method claims 1, 2, and 5-7, respectively.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 3, 4, 8, 11, 12, 16, 19, 20, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Kluckner in view of Lurie et al., United States Patent Publication 2017/0046833 (hereinafter "Lurie").

Claim 3: Kluckner fails to expressly disclose six degrees of freedom. Lurie discloses: each of the virtual 3D camera poses is characterized in terms of six degrees of freedom (see paragraphs [0163] and [0167]); Lurie teaches poses characterized in terms of six degrees of freedom. And the virtual 3D camera poses are generated to cover different viewing angles with respect to the 3D model with an increment in each of the six degrees of freedom according to a pre-determined resolution (see paragraphs [0163] and [0167]); Lurie teaches that poses are generated to cover every angle with respect to the model.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Kluckner to include camera poses in terms of six degrees of freedom for the purpose of efficiently covering different angles of the 3D model, as taught by Lurie.

Claim 4: Kluckner fails to expressly disclose generating a mask and using ridge lines for model reconstruction. Lurie discloses: wherein the 2D features extracted from each of the virtual 2D images include one or more of: a 2D structure corresponding to a 2D projection of the target organ in the virtual 2D image (see paragraph [0056]); Lurie teaches that the 2D images of the target organ correspond to a 2D structure. A mask of the 2D structure corresponding to the target organ (see paragraphs [0010] and [0011]); Lurie teaches a mask of the 2D structure for the organ. A 2D ridge projected from a 3D ridge on the target organ modeled by the 3D model (see paragraphs [0112] and [0113]); Lurie teaches that the 2D ridge in the image is projected from the 3D ridge of the model. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Kluckner to include generating the masking and ridges of the 2D images for the purpose of efficiently processing 2D image data relating to 3D models, as taught by Lurie.

Claim 8: Kluckner fails to expressly disclose generating a 3D model reconstruction. Lurie discloses: receiving, during a medical procedure, a 2D image acquired by a camera inserted into a patient's body near the target object to capture surrounding information (see paragraph [0056]); Lurie teaches receiving 2D image data taken with an endoscope inside the body near the target organ. And detecting, from the 2D image, a 2D object corresponding to the target organ and/or 2D structures corresponding to some of the 3D anatomical structures (see paragraph [0060]).
Lurie teaches detecting a 2D object corresponding to the image based on structure from the 3D model. Extracting 2D features of the detected 2D object and/or 2D structures (see paragraph [0060]); Lurie teaches extracting features from the detected object. Predicting, based on the 2D feature/camera pose mapping models, an estimated 3D camera pose of the camera (see paragraph [0060]); Lurie teaches predicting a pose based on the camera pose mapping. And projecting the 3D model to visualize the target organ and/or some of the anatomical structures associated therewith in accordance with a perspective determined based on the estimated 3D camera pose (see paragraphs [0059] and [0064]); Lurie teaches projecting the model to visualize the target organ in 3D space based on the pose and mapping.

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Kluckner to include generating a 3D model reconstruction from 2D images for the purpose of efficiently reconstructing 3D models from 2D image data, as taught by Lurie.

Claims 11, 12, 16: Although claims 11, 12, and 16 are machine-readable medium claims, they are interpreted and rejected for the same reasons as method claims 3, 4, and 8, respectively.

Claims 19, 20, 24: Although claims 19, 20, and 24 are system claims, they are interpreted and rejected for the same reasons as method claims 3, 4, and 8, respectively.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIONNA M BURKE, whose telephone number is (571) 270-7259. The examiner can normally be reached M-F 8a-4p. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TIONNA M BURKE/
Examiner, Art Unit 2178
3/20/26

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178

Prosecution Timeline

Dec 04, 2023
Application Filed
Mar 20, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596470: GESTURE-BASED MENULESS COMMAND INTERFACE
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591731: SYSTEM AND METHOD FOR SELECTING RELEVANT CONTENT IN AN ENHANCED VIEW MODE
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12572698: INFRASTRUCTURE METHODS AND SYSTEMS FOR EXTENDING CUSTOMER RELATIONSHIP MANAGEMENT PLATFORM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12564152: SYSTEM AND METHOD FOR MANAGEMENT OF SENSOR DATA BASED ON HIGH-VALUE DATA MODEL
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12547823: DYNAMICALLY AND SELECTIVELY UPDATED SPREADSHEETS BASED ON KNOWLEDGE MONITORING AND NATURAL LANGUAGE PROCESSING
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 54%
With Interview: 73% (+19.3%)
Median Time to Grant: 4y 9m
PTA Risk: Low
Based on 431 resolved cases by this examiner. Grant probability derived from career allow rate.
