Prosecution Insights
Last updated: April 19, 2026
Application No. 18/280,283

System and Method for Tracking an Object Based on Skin Images

Non-Final OA (§102, §103)

Filed: Sep 05, 2023
Examiner: BRUCE, FAROUK A
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: UNIVERSITY OF PITTSBURGH - OF THE COMMONWEALTH SYSTEM OF HIGHER EDUCATION
OA Round: 3 (Non-Final)

Grant Probability: 46% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 7m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 46% (93 granted / 200 resolved; -23.5% vs TC avg)
Interview Lift: +37.2% higher grant rate among resolved cases with an interview
Avg Prosecution: 4y 7m typical timeline, with 58 applications currently pending
Total Applications: 258 across all art units (career history)
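
For the curious, these headline figures are mutually consistent. Below is a minimal sketch of the arithmetic, assuming (as the projections section suggests) that the interview lift is an additive percentage-point adjustment to the career allow rate; the variable names are ours, not the dashboard's:

```python
# Reconstructing the dashboard's examiner statistics from the raw counts.
# Assumption: "with interview" = career allow rate + interview lift in
# percentage points, capped at 100%; rounding matches the displayed figures.
granted, resolved = 93, 200

allow_rate = granted / resolved            # 0.465 -> displayed as 46%
interview_lift = 0.372                     # +37.2 percentage points
with_interview = min(allow_rate + interview_lift, 1.0)
tc_average = allow_rate + 0.235            # "-23.5% vs TC avg" implies ~70% TC average

print(f"career allow rate:  {allow_rate:.1%}")      # 46.5%
print(f"with interview:     {with_interview:.1%}")  # 83.7%, displayed as 84%
print(f"implied TC average: {tc_average:.1%}")      # 70.0%
```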

Statute-Specific Performance

§101: 6.7% (-33.3% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)

Tech Center averages are estimates • Based on career data from 200 resolved cases
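
A small consistency check on these figures: subtracting each delta from the examiner's per-statute rate recovers the Tech Center baseline, and every statute implies the same 40.0% average, suggesting a single estimated baseline rather than per-statute ones. A sketch, assuming the deltas are in percentage points:

```python
# Recovering the Tech Center average baseline from the examiner's
# per-statute allowance rates and their deltas (percentage points).
rates = {"§101": (6.7, -33.3), "§103": (47.3, +7.3),
         "§102": (15.7, -24.3), "§112": (21.3, -18.7)}

for statute, (examiner_rate, delta_vs_tc) in rates.items():
    tc_avg = examiner_rate - delta_vs_tc
    print(f"{statute}: examiner {examiner_rate}% vs TC avg {tc_avg:.1f}%")
# Every statute implies a TC average of exactly 40.0%.
```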

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-8, 11, 14-17, 19, and 21-27 are pending. All claims are rejected.

Response to Arguments

Applicant's arguments in the responses filed 08/20/2025 with respect to the rejection of claims 1, 22, and 27 under 35 U.S.C. 103 have been fully considered, but they are NOT persuasive. Applicant remarks on pages 8-9 that Mihailescu (US 20190336004 A1) only mentions a “virtual model” 1615 of fig. 16, failing to teach the three-dimensional (3D) surface models of the subject and object (that is, the transducer) and an optical model of the camera. However, Examiner notes that Mihailescu teaches a visualization module or an augmented reality device that allows the user to navigate the models, change visualization options, change system settings, and obtain supplementary information about the various components of the scene (paragraph 93). The models include models of the object (paragraph 15 discloses models of the tissue), the probe (paragraphs 100 and 106), and the camera (included in the whole scene in paragraphs 126 and 224). Put another way, the visualization system provides a 3D model of the whole scene, including all the elements within the scene, according to paragraph 126. That is, as the user views a particular surgery scene, the objects within the scene comprise reconstructed 3D visualizations or 3D models of the physical elements within the scene. Paragraph 106 further emphasizes this by stating that “One purpose of the fiducial object 310 is to help the computer vision system better determine the scale of the whole scene, to unambiguously position the probe in the scene, and to provide a landmark for 3-D modeling of the object outline”. Paragraph 126 further discloses a configuration in which, with no fiducial, triangulation of the elements within the scene is used to determine the relative distance between the elements in the scene and build a 3D model of the whole scene, and paragraph 129 indicates a combination of such a triangulation configuration and the fiducial configuration. Therefore, Mihailescu teaches all the limitations of claim 1, and hence the claims stand rejected.

Withdrawn Objections

Pursuant to Applicant's amendments filed 08/20/2025, the objection made to claim 27 has been withdrawn.

Withdrawn Rejections

Pursuant to Applicant's amendments filed 08/20/2025, the rejections of claims 1-8, 11, 14-17, 19, and 21-27 under 35 U.S.C. 112(b) have been withdrawn. Pursuant to Applicant's amendments filed 08/20/2025, the rejections of claims 1-8, 11, and 21-27 under 35 U.S.C. 101 have been withdrawn.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-8, 14-15, 19, 21, and 23-27 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mihailescu, et al., US 20190336004 A1.
Regarding claim 1, Mihailescu teaches a system for determining a pose of an object relative to a subject with a skin or skin-like surface ([0010] states “An ultrasound transducer sharing a housing with a machine-vision camera system is disclosed. The integrated camera views an object, such as a patient's body, and determines the ultrasound transducer's x, y, z position in space and pitch, yaw, and roll orientation with respect to the object”), the system comprising: a camera not attached to the object and arranged to view the object and a surface of the subject ([0226] states “the at least one camera or ranging system can be part of a head-mounted tracking and visualization (HMTV) system”); a guide configured to guide an operator to move the object to a desired pose relative to the subject (in fig. 15, a graphical user interface provides a guide for an operator regarding the positions and orientations of the probe; [0213] states that “The window 1504 comprising the 3-D model of the patient can also comprise any of the elements described for window 1506, as well as 2-D ultrasound scans. The purpose of this window can be to guide the local clinician on the best position and orientation of the probe in respect the patient 1507. The best position and orientation of the probe is suggested either by the local analysis results, as indicated by a computer system, or as recommended by a remote user”); and a computing device (computing unit 104 of [0079]) in communication with the camera ([0079] discloses synchronization with the probe and camera), the computing device storing a three-dimensional (3D) surface model of the subject, a 3D surface model of the object, and an optical model of the camera ([0229] discloses determining a 3-D virtual model 1615 or stereoscopic view of the setup, which includes the models of the transducer and of the surface of the patient as captured using the camera, that is, the optical model), and the computing device configured to: determine 3D camera pose relative to the 3D surface model of the subject for which an image of the subject captured by the camera matches the 3D surface model of the subject projected through the optical model of the camera ([0102] states “Another advantage of a stereoscopic system, in particular, is that for the 3D modeler analysis step described below (in step 604 of FIG. 6), to be implemented on computer 305, the scale of the investigated scene will be apparent from matching the frames taken simultaneously from the multiple cameras, whose relative positions and orientations can be known with high precision. Also, in this arrangement no movement of the system is necessary to construct the 3D model of the investigated object”); and determine 3D object pose relative to the subject for which an image of the object matches the 3D surface model of the object projected through the optical model of the camera ([0213] states that “The best position and orientation of the probe is suggested either by the local analysis results, as indicated by a computer system, or as recommended by a remote user”, with further clarification from [0214], stating that “the 3-D model ultrasound probe 1508, is shown positioned in respect to the 3-D model of the patient 1507, as obtained by the tracking system”).

Regarding claim 2, Mihailescu further teaches wherein the camera is arranged in a smartphone or tablet ([0026]-[0027] disclose that the portable computing device is a smart phone with a camera).
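
Claim 1's “matches the 3D surface model ... projected through the optical model of the camera” limitation describes pose estimation by reprojection matching. As a rough illustration only (not the applicant's or Mihailescu's implementation; the point correspondences, intrinsic matrix, and solver choice below are all assumptions), one can recover a rigid pose by minimizing the pixel error between projected model points and their observed image locations:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Pinhole "optical model of the camera": intrinsics K (focal lengths and
# principal point). These values, like everything below, are made up.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points, rvec, tvec):
    """Project 3D model points through the pinhole model under pose (rvec, tvec)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points @ R.T + tvec          # model frame -> camera frame
    uvw = cam @ K.T                    # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide -> pixel coordinates

def residual(pose, points, observed):
    """Pixel error between the projected model and the observed image points."""
    return (project(points, pose[:3], pose[3:]) - observed).ravel()

# Synthetic data: points sampled from a surface model, imaged under a known pose.
rng = np.random.default_rng(0)
model_points = rng.uniform(-0.1, 0.1, size=(30, 3))
true_pose = np.array([0.1, -0.2, 0.05, 0.02, -0.01, 0.5])  # axis-angle + translation
observed = project(model_points, true_pose[:3], true_pose[3:])

# Recover the pose for which the projected model matches the image.
fit = least_squares(residual, x0=np.array([0.0, 0, 0, 0, 0, 0.4]),
                    args=(model_points, observed))
print("recovered pose:", np.round(fit.x, 3))  # ~= true_pose
```

A production system would match a dense surface model (or its silhouette) rather than pre-labeled points, but the optimization structure, projecting a model through the camera's optical model and minimizing the mismatch, is the same.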
Regarding claim 3, Mihailescu further teaches wherein the object is at least one of the following: a surgical tool, an ultrasound probe, a clinician's hand or finger, or any combination thereof ([0214] discloses that the object is an ultrasound probe of which a model is generated).

Regarding claim 4, Mihailescu further teaches wherein at least one of the 3D surface models of the subject and the 3D surface models of the object is derived from a set of images from a multi-camera system ([0081] and [0083] disclose two or more cameras for capturing the data that is processed to form the 3D models).

Regarding claim 5, Mihailescu further teaches wherein at least one of the 3D surface models of the subject and the 3D surface models of the object is derived from a temporal sequence of camera images ([0107] indicates that “The data stream (or video stream) coming from the light sensing device (or camera) is analyzed to identify the fiducial object in the field of view. By analyzing the apparent form of the fiducial object, the position and orientation of the probe in respect to the fiducial object is obtained, and from that, the position and orientation of the probe in respect to the investigated object”).

Regarding claim 6, Mihailescu further teaches wherein the optical model of the camera is derived from a calibration of the camera prior to a run-time operation of the system ([0114] states that “an image rectification 601 analysis step may be used to correct the position of the pixels in the frame using a pre-measured calibration matrix”).

Regarding claim 7, Mihailescu further teaches wherein the optical model of the camera is derived during a run-time operation of the system ([0126] indicates “an estimate of the 3D position and orientation of the camera is obtained by tracking features and highlights associated with various objects in the field of view in subsequent image frames”).

Regarding claim 8, Mihailescu further teaches an inertial navigation system incorporated into the object and configured to output data associated with at least one of the 3D object pose and the 3D camera object pose ([0016] states that “An inertial measurement unit (IMU) can be supported by the housing, in which the memory includes instructions for execution by the at least one processor configured to determine the spatial position and orientation of the ultrasound transducer with respect to the object using output from the IMU”).

Regarding claim 14, Mihailescu further teaches wherein the guide is configured to guide the operator based on a real-time determination of a present object pose ([0216] states “The patient contour as measured by the ranging or light sensing system can be matched to the outline of a generic human model. This will allow the computer guidance system to give precise instructions about the positioning and movement of the ultrasound probe in respect to the real patient model. Ultrasound anatomical landmarks observed in real-time can be matched in 3-D to landmarks in the 3-D models for a much more precise registration that will correct for organ movements and displacements due to variations in body habitus and position”).

Regarding claim 15, Mihailescu further teaches wherein the guide identifies to the operator when a desired pose has been accomplished ([0213] states “The best position and orientation of the probe is suggested either by the local analysis results, as indicated by a computer system, or as recommended by a remote user”).
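
Claim 6's offline calibration, echoed by the “pre-measured calibration matrix” of [0114], is conventionally performed with a planar checkerboard before run-time. A minimal OpenCV sketch, assuming a 9x6 inner-corner board; the capture filenames are placeholders:

```python
import glob
import cv2
import numpy as np

# Offline pinhole calibration: estimate the intrinsic matrix and distortion
# coefficients from checkerboard views captured before run-time tracking.
pattern = (9, 6)  # inner corners per row and column (assumed board geometry)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):  # placeholder filenames
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix; dist holds the lens distortion terms.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("intrinsic matrix K:\n", K)
```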
Regarding claim 19, Mihailescu further teaches wherein the guide displayed on a graphical display comprises a rendering of the object in the desired pose relative to the subject (see fig. 15 and [0214]).

Regarding claim 21, Mihailescu further teaches wherein the object is a virtual object comprising a single target point on the surface of the subject ([0214] discloses that “The recommended position of the probe is represented by the graphical guiding element 1509”).

Regarding claim 23, Mihailescu teaches a method for determining a pose of an object relative to a subject (the abstract describes methods for determining the position and orientation of a sensor probe), comprising: capturing, with at least one computing device, a sequence of images with a stationary or movable camera unit arranged in a room ([0014] states “a camera having a portion enclosed by the housing assembly and rigidly connected with the ultrasound transducer, and at least one processor operatively coupled with a memory and the camera, the memory having instructions for execution by the at least one processor configured to determine a spatial position and orientation of the ultrasound transducer with respect to an object using an image captured by the camera”), the sequence of images comprising the subject and an object moving relative to the subject ([0107] indicates that “The data stream (or video stream) coming from the light sensing device (or camera) is analyzed to identify the fiducial object in the field of view. By analyzing the apparent form of the fiducial object, the position and orientation of the probe in respect to the fiducial object is obtained, and from that, the position and orientation of the probe in respect to the investigated object”); determining, with at least one computing device, the pose of the object with respect to the subject in at least one image of the sequence of images based on computing or using a prior surface model of the subject ([0102] states “Another advantage of a stereoscopic system, in particular, is that for the 3D modeler analysis step described below (in step 604 of FIG. 6), to be implemented on computer 305, the scale of the investigated scene will be apparent from matching the frames taken simultaneously from the multiple cameras, whose relative positions and orientations can be known with high precision. Also, in this arrangement no movement of the system is necessary to construct the 3D model of the investigated object”), a surface model of the object, and an optical model of the stationary or movable camera unit ([0213] states that “The best position and orientation of the probe is suggested either by the local analysis results, as indicated by a computer system, or as recommended by a remote user”, with further clarification from [0214], stating that “the 3-D model ultrasound probe 1508, is shown positioned in respect to the 3-D model of the patient 1507, as obtained by the tracking system”); and guiding, with a guide and at least one computing device, an operator to move the object to a desired pose relative to the subject (in fig. 15, a graphical user interface provides a guide for an operator regarding the positions and orientations of the probe; [0213] states that “The window 1504 comprising the 3-D model of the patient can also comprise any of the elements described for window 1506, as well as 2-D ultrasound scans. The purpose of this window can be to guide the local clinician on the best position and orientation of the probe in respect the patient 1507. The best position and orientation of the probe is suggested either by the local analysis results, as indicated by a computer system, or as recommended by a remote user”).

Regarding claim 24, Mihailescu further teaches wherein the at least one computing device and the stationary or movable camera unit are arranged in a mobile device ([0026]-[0027] disclose a smart phone with camera).

Regarding claim 26, Mihailescu further teaches wherein determining the pose of the object comprises: generating a projection of the surface model of the subject through the optical model of the stationary or movable camera unit; and matching at least one image to the projection ([0213] states that “The best position and orientation of the probe is suggested either by the local analysis results, as indicated by a computer system, or as recommended by a remote user”, with further clarification from [0214], stating that “the 3-D model ultrasound probe 1508, is shown positioned in respect to the 3-D model of the patient 1507, as obtained by the tracking system”).

Regarding claim 27, Mihailescu teaches a system for determining a pose of an object relative to a subject (see abstract), comprising: a camera unit ([0226] states “the at least one camera or ranging system can be part of a head-mounted tracking and visualization (HMTV) system”); a data storage device configured to store a surface model of a subject, a surface model of an object, and an optical model of the camera unit (storage device 1314 of [0197] and [0199]); a guide configured to guide an operator to move the object to a desired pose relative to the subject (in fig. 15, a graphical user interface provides a guide for an operator regarding the positions and orientations of the probe; [0213] states that “The window 1504 comprising the 3-D model of the patient can also comprise any of the elements described for window 1506, as well as 2-D ultrasound scans. The purpose of this window can be to guide the local clinician on the best position and orientation of the probe in respect the patient 1507. The best position and orientation of the probe is suggested either by the local analysis results, as indicated by a computer system, or as recommended by a remote user”); and at least one computing device (processor of [0014]) programmed or configured to: capture a sequence of images with the camera unit while the camera unit is stationary and arranged in a room ([0014] states “a camera having a portion enclosed by the housing assembly and rigidly connected with the ultrasound transducer, and at least one processor operatively coupled with a memory and the camera, the memory having instructions for execution by the at least one processor configured to determine a spatial position and orientation of the ultrasound transducer with respect to an object using an image captured by the camera”), the sequence of images comprising the subject and the object moving relative to the subject ([0107] indicates that “The data stream (or video stream) coming from the light sensing device (or camera) is analyzed to identify the fiducial object in the field of view. By analyzing the apparent form of the fiducial object, the position and orientation of the probe in respect to the fiducial object is obtained, and from that, the position and orientation of the probe in respect to the investigated object”; this is performed using the HMTV system); and determine the pose of the object with respect to the subject in at least one image of the sequence of images based on a surface model of the subject ([0102] states “Another advantage of a stereoscopic system, in particular, is that for the 3D modeler analysis step described below (in step 604 of FIG. 6), to be implemented on computer 305, the scale of the investigated scene will be apparent from matching the frames taken simultaneously from the multiple cameras, whose relative positions and orientations can be known with high precision. Also, in this arrangement no movement of the system is necessary to construct the 3D model of the investigated object”), a surface model of the object, and an optical model of the camera unit ([0213] states that “The best position and orientation of the probe is suggested either by the local analysis results, as indicated by a computer system, or as recommended by a remote user”, with further clarification from [0214], stating that “the 3-D model ultrasound probe 1508, is shown positioned in respect to the 3-D model of the patient 1507, as obtained by the tracking system”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Mihailescu in view of Usami, et al., US 20210258710 A1 (disclosed in IDS filed 03/26/2024). Regarding claim 11, Mihailescu teaches all the limitations of claim 1. Mihailescu fails to teach wherein determining at least one of the 3D camera object pose and the 3D object pose (as indicated above in the rejection of claim 1) is based on an inverse rendering of at least one of the 3D surface model of the subject and the 3D surface model of the object.
However, within the same field of endeavor, Usami teaches that a captured image obtaining unit 50 of an output control device 10 obtains a captured image, such as a polarization image, from an imaging device 12, and a space information obtaining unit 54 obtains a normal line and a position of a surface of an actual object in a space and a sound absorption coefficient at the surface (abstract), with [0066] stating that “the sound absorption coefficient obtaining unit 68 may identify the material by solving an inverse problem of a rendering equation that is typically used in computer graphics drawing”. [0067]-[0070] describe the use of inverse rendering on a surface of an object. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Mihailescu such that determining at least one of the 3D camera object pose and the 3D object pose (as indicated above in the rejection of claim 1) is based on an inverse rendering of at least one of the 3D surface model of the subject and the 3D surface model of the object, as taught by Usami, providing an accurate method of precisely determining the positions and postures of objects ([0101]-[0102]).

Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Mihailescu in view of Weide, et al., JP 2018538015 A (disclosed in IDS filed 03/26/2024). Regarding claim 16, Mihailescu teaches all the limitations of claim 12. Mihailescu fails to teach lights attached to the object, wherein the operator is guided to move the object by selective activation of the lights. However, Weide discloses lights attached to the object, wherein the operator is guided to move the object by selective activation of the lights (the guide is a series of lights in which the surgeon is guided to move by keeping the continuously lit light in the center of an X; page 1). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Mihailescu to include lights attached to the object, wherein the operator is guided to move the object by selective activation of the lights, as taught by Weide, for the benefit of training the physician or guiding the physician to perform a virtual procedure (Weide; page 2).

Regarding claim 17, Mihailescu teaches all the limitations of claim 12. Mihailescu fails to teach wherein the guide is configured to guide the operator based on at least one of audio cues and tactile cues. However, Weide further discloses wherein the guide is configured to guide the operator based on audio cues (audio feedback is used to guide the physician; page 2). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Mihailescu wherein the guide is configured to guide the operator based on audio cues, as taught by Weide, for the benefit of training the physician or guiding the physician to perform a virtual procedure (Weide; page 2).

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Mihailescu in view of Iwano, et al., US 6487431 B1 (disclosed in IDS filed 03/26/2024). Regarding claim 22, Mihailescu teaches all the limitations of claim 12. Mihailescu fails to teach wherein the object is a virtual object comprising a one-dimensional line intersecting the surface of the subject at a single target point in a particular direction relative to the surface.
However, Iwano discloses a radiographic apparatus which allows a biopsy needle or a drug injection needle to be run into a patient accurately and quickly (abstract), wherein the object is a virtual object comprising a one-dimensional line intersecting the surface of the subject at a single target point in a particular direction relative to the surface (fig. 9; col. 7, line 49 - col. 8, line 3). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Mihailescu wherein the object is a virtual object comprising a one-dimensional line intersecting the surface of the subject at a single target point in a particular direction relative to the surface, as taught by Iwano, for guiding an operator accurately and quickly (col. 7, lines 58-62).

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Mihailescu in view of Bickel, et al., US 20120185218 A1. Regarding claim 25, Mihailescu teaches all the limitations of claim 23. Mihailescu fails to teach wherein determining the pose of the object includes determining a skin deformation of the subject. However, Bickel teaches a computer-implemented method for physical face cloning to generate a synthetic skin. Rather than attempting to reproduce the mechanical properties of biological tissue, an output-oriented approach is utilized that models the synthetic skin as an elastic material with isotropic and homogeneous properties (e.g., silicone rubber). The method includes capturing a plurality of expressive poses from a human subject and generating a computational model based on one or more material parameters of a material (see abstract). [0075] states that “the optimization process may be utilized to modify a local thickness of the synthetic skin geometry in such a way that when mechanical actuators of the animatronic device are set to values corresponding to a particular expressive pose, the resulting deformation of the skin matches the expressive poses' target positions q as closely as possible. In a physical simulation, the actuators settings result in hard positional constraints that can be directly be applied to the corresponding deformed positions.”. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Mihailescu wherein determining the pose of the object includes determining a skin deformation of the subject, as taught by Bickel, providing an accurate prediction of the deformed shape ([0026]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
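
Claim 22's virtual object, a one-dimensional line meeting the subject's surface at a single target point in a particular direction, is geometrically a ray cast against the surface model. A rough sketch of that intersection test, assuming a triangulated surface model and using the standard Möller-Trumbore algorithm (all values below are illustrative):

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore: intersection point of a ray with a triangle, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if not 0.0 <= u <= 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv                 # distance along the ray
    return origin + t * direction if t > eps else None

# One triangle of a skin-surface mesh, and a virtual line aimed at it.
tri = [np.array([0.0, 0.0, 0.5]), np.array([1.0, 0.0, 0.5]), np.array([0.0, 1.0, 0.5])]
target = ray_triangle_intersect(np.array([0.2, 0.2, 0.0]),   # the line's anchor point
                                np.array([0.0, 0.0, 1.0]),   # its direction
                                *tri)
print("single target point on the surface:", target)         # [0.2 0.2 0.5]
```

A full surface model would run this test over every triangle (or an acceleration structure) and keep the nearest hit as the single target point.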
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Farouk A Bruce, whose telephone number is (408) 918-7603. The examiner can normally be reached Mon-Fri, 8am-5pm PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Koharski, can be reached at (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FAROUK A BRUCE/
Examiner, Art Unit 3797

/CHRISTOPHER KOHARSKI/
Supervisory Patent Examiner, Art Unit 3797

Prosecution Timeline

Sep 05, 2023: Application Filed
Sep 05, 2023: Response after Non-Final Action
May 10, 2025: Non-Final Rejection — §102, §103
Aug 20, 2025: Response Filed
Sep 11, 2025: Final Rejection — §102, §103
Feb 17, 2026: Request for Continued Examination
Mar 12, 2026: Response after Non-Final Action
Apr 03, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589199: APPARATUS FOR INCREASED DYE FLOW (Granted Mar 31, 2026; 2y 5m to grant)
Patent 12569227: ULTRASOUND BEAMFORMER-BASED CHANNEL DATA COMPRESSION (Granted Mar 10, 2026; 2y 5m to grant)
Patent 12558030: Device for Detecting and Illuminating the Vasculature Using an FPGA (Granted Feb 24, 2026; 2y 5m to grant)
Patent 12551173: SYSTEM AND METHOD FOR SINGLE-SCAN REST-STRESS CARDIAC PET (Granted Feb 17, 2026; 2y 5m to grant)
Patent 12521053: Methods and Devices for Electromagnetic Measurements from Ear Cavity (Granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 46%
With Interview: 84% (+37.2%)
Median Time to Grant: 4y 7m
PTA Risk: High

Based on 200 resolved cases by this examiner. Grant probability derived from career allow rate.
