Prosecution Insights
Last updated: April 19, 2026
Application No. 17/973,982

SYSTEMS AND METHODS FOR AUTOMATIC CARDIAC IMAGE ANALYSIS

Status: Final Rejection (§103)
Filed: Oct 26, 2022
Examiner: CADEAU, WEDNEL
Art Unit: 2632
Tech Center: 2600 — Communications
Assignee: Shanghai United Imaging Intelligence Co. Ltd.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 9m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 72% (381 granted / 532 resolved), +9.6% vs Tech Center average
Interview Lift: +19.6% in resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 42 applications currently pending
Career History: 574 total applications across all art units
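As a sanity check, the headline rates above follow directly from the career counts on this page. A minimal sketch (the additive treatment of the interview lift is an assumption; the page does not state its exact model):

```python
# Career allow rate from the examiner's resolved-case counts shown above.
granted, resolved = 381, 532
allow_rate = granted / resolved * 100
print(f"{allow_rate:.1f}%")  # ~71.6%, displayed as 72%

# Interview-adjusted probability: the page reports a +19.6-point lift
# for resolved cases that included an examiner interview (assumed additive).
interview_lift = 19.6
with_interview = allow_rate + interview_lift
print(f"{with_interview:.0f}%")  # displayed as 91%
```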

Statute-Specific Performance

§101: 2.5% (-37.5% vs TC avg)
§103: 75.6% (+35.6% vs TC avg)
§102: 3.5% (-36.5% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 532 resolved cases.
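The per-statute deltas are internally consistent: treating each delta as a simple percentage-point difference (an assumption; the page does not define the comparison), every statute implies the same Tech Center baseline:

```python
# Examiner rate (%) and reported delta vs. Tech Center average (percentage points),
# as listed on this page.
stats = {
    "§101": (2.5, -37.5),
    "§103": (75.6, +35.6),
    "§102": (3.5, -36.5),
    "§112": (16.5, -23.5),
}

# Implied TC average = examiner rate minus delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% TC baseline
```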

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Prior art cited in this office action:
Mihalef et al. (US 20220328195 A1, hereinafter “Mihalef”)
Soble et al. (US 20230114066 A1, hereinafter “Soble”)
Burlina et al. (US 20140071125 A1, hereinafter “Burlina”)

Response to Arguments

Applicant's arguments filed 06/10/2025 have been fully considered, but they are not persuasive.

Applicant’s Arguments/Remarks: An inspection of the cited paragraphs reveals no teaching or suggestion about determining “a medical abnormality,” much less doing so based on a machine-learned pathology detection model pre-trained for determining a tissue pattern or a tissue parameter associated with the at least one of the multiple anatomical regions, and further determining whether the medical abnormality exists based on the determined tissue pattern or tissue parameter. Therefore, claim 1 should be allowable over the cited references at least for this reason.

Examiner’s Response: The examiner disagrees with the applicant's assertion that the combination of the cited references does not teach or suggest the applicant's invention as claimed. Mihalef teaches that “the present embodiments describe a system and a method for heart strain determination (i.e., the determination of the engineering property called strain of the heart)” (Mihalef [0002]). In other words, from the outset the system of Mihalef is geared toward determining when a medical abnormality occurs in the heart. Mihalef further teaches that “according to a preferred system, the pose estimation module and/or the 3D deformation estimation module is a machine learning model, especially a deep learning model, particularly preferably a neuronal or neural network” (Mihalef [0079]).
Burlina teaches that “disclosed herein are methods and systems to estimate tissue stress and strain and physiological parameters of the tissues” (Burlina [0011]). Myocardial strain, which may be obtained from displacement vectors, may play a role in the diagnosis of cardiovascular diseases. An illustrative pathology related to strain-based diagnostics is hypertrophic cardiomyopathy (HCM). Accurate strain information may allow clinicians to more deeply understand the mechanisms behind HCM, and may provide a relatively fast and accurate screening technique for the disease (Mihalef [0240]). Therefore, one of ordinary skill in the art at the time of the effective filing date would have been inclined to use machine learning on the tissues of the heart to determine the heart strain, which is an indication of an abnormality such as disease.

Applicant is reminded that the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). In this case, machine learning is a well-known technique and is used by Mihalef to determine an abnormality of the heart, and the strain can be determined based on tissue patterns or tissue parameters.

Applicant’s Arguments/Remarks: Amended claim 1 also requires that the apparatus be configured to register two or more medical images from the at least one group of medical images, wherein the registration is performed based on a machine-learned image registration model pre-trained to compensate for a motion associated with the two or more medical images during the registration.
These requirements were previously part of claim 10, and the Office Action alleges that they were taught by Burlina in paragraphs [0274] and [0429]. Burlina, however, includes only one passing reference to “machine learning,” as in “machine learning-based filter” (see paragraph [0643] of Burlina), and the reference is made in the context of image delineation/segmentation instead of image registration. There is simply no teaching or suggestion whatsoever in Burlina about performing image registration based on a machine-learned image registration model, let alone a machine-learned model pre-trained to compensate for a motion associated with the medical images being registered. As such, claim 1 should be allowable over the cited references also for this additional reason.

Examiner’s Response: The examiner disagrees with the applicant's assertion that the combination of the cited prior art does not teach or suggest a machine-learned model pre-trained to compensate for a motion associated with the medical images being registered. Mihalef teaches: “Regarding the pose estimation module, it is preferred that cardiac-centric coordinate system values, e.g., origin, angles, height, radius, and categorical tags, e.g., long axis, short axis, 2CH, 3CH, 4CH, apical, medial, basal, are used to train this module, that preferably has the architecture of a convolutional neural network (CNN).” A system and a method are provided for determining a 3D strain of a heart by using a series of 2D-slice images of the heart. Although these 2D-slice images should show the same 2D area at different times, they could also show a slightly different pose due to movements of the heart or the patient. The 2D-slice images are images from a cross section of the heart. The images form a series because the 2D-slice images have been recorded one after another at different times of the cardiac cycle. It is preferred that they all show the same slicing-pose (position and orientation of a plane).
However, by adequate training of the pose estimation module and the 3D deformation estimation module, there could be images from different poses (plane positions and orientations), especially from slight deviations of a main pose (due to heart movements or movements of a patient, e.g., due to breathing). The 2D-slice images should show different phases of the heart cycle (but not necessarily of the same heart cycle or necessarily the whole heart cycle) (Mihalef [0009]-[0016]).

Burlina also teaches: “Further to the discussion above, unintended probe motion may overwhelm meaningful flow. To compensate, relative flow within the heart may be examined and/or the myocardium may be registered between frame pairs to eliminate effects of probe motion and/or global heart movement” (Burlina [0074]). As can be seen, Burlina confirms that the series of 2D-slice images show the same area at different times such that compensation of differences due to heart movement can be achieved (Mihalef [0009]-[0017], [0050]) using a pose estimation neural network model. Therefore, under the broadest reasonable interpretation, the examiner maintains that the combination of the cited prior art teaches or suggests the applicant's invention as claimed. Claim 12 contains similar limitations as claim 1 and is therefore not allowable for the same reasons given above with regard to claim 1.

Applicant’s Arguments/Remarks: Applicant argues that none of the references, including paragraphs [0052] and [0140] of Mihalef and paragraphs [0026] and [0076] of Soble cited by the Office Action, even remotely contemplate detecting one or more anatomical landmarks that indicate where a left ventricle of the heart intersects with a right ventricle of the heart, and using the detected landmarks to segment the heart into multiple myocardial segments.

Examiner’s Response: The examiner disagrees with the applicant's assertion that the combination of the cited prior art does not teach or suggest the applicant's invention as claimed.
Mihalef teaches that “the triangulated surface is preferably parameterized based on a cardiac-centric reference system built from the chamber geometry. For instance, considering the left ventricle (LV), three orthogonal axes of the reference system can be defined as the long axis of the left ventricle (line connecting the LV apex to the barycenter of the mitral annulus), the axis connecting the barycenter of the mitral annulus with the barycenter of the tricuspid valve, and a third axis orthogonal to the previous two. According to a preferred embodiment, the training set is defined based on synthetically generated motion fields. In this case, one or more anatomical models representing one or more cardiac chambers are built based on statistical shape models or based on shape templates or using segmentation from images” (Mihalef [0057]).

Soble teaches “segmenting each image by using the processor that isolates one or more structures within each image based on the selected topic of the heading field, analyzing the one or more structures using the processor that extracts attribute information of the one or more isolated structures” (claim 1); “wherein the isolated one or more structures is selected from the group comprising: a cardiac chamber, a cardiac valve, myocardium, a cardiac septum, an artery, a vein, a cardiac border, a valve leaflet, a valve, a pituitary gland, and a lung mass” (claim 8). The cardiac septum separates the right ventricle from the left ventricle. Therefore, detecting the cardiac septum allows for the segmentation of the heart, wherein a segment of the right ventricle and a segment of the left ventricle are obtained separately based on the detected cardiac septum.
Therefore, taking the teachings of Mihalef, Soble, and Burlina as a whole, it would have been obvious to one of ordinary skill in the art to determine a landmark as to where the separation of the left ventricle and the right ventricle occurs in order to allow for the proper segmentation of the left ventricle and the right ventricle. Therefore, the examiner maintains that the combination of the cited prior art teaches all the limitations of the claims as amended. Claims 1-7, 9, 11-17, and 19 are not allowable over the cited prior art.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: “A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.”

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9, 11-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mihalef et al. (US 20220328195 A1, hereinafter “Mihalef”) in view of Soble et al. (US 20230114066 A1, hereinafter “Soble”) and further in view of Burlina et al. (US 20140071125 A1, hereinafter “Burlina”).
Regarding claims 1 and 12: Mihalef teaches a system and method (Mihalef abstract, [0002], [0076], fig. 1, where Mihalef discloses a method and computer programs), comprising: at least one processor (Mihalef [0015], where Mihalef teaches that a suitable computing unit should have enough memory and computing power to host the modules. This means that the computing unit is able to process data with these modules in order to get results from input data. However, since there is a sequence of performing acts during the determination process, it is not necessary that all modules are hosted at the same time. Modules could also be present in a static memory and fetched (uploaded in a RAM) by the computing unit when needed. Such computing units, typically having a processor, a static memory, a random-access memory (RAM), a graphics processing unit (GPU), an input interface and an output interface, are known in the art. The RAM could be used to host a module while working with it, store the results of a module and feed them to another module) configured to: obtain a plurality of medical images associated with a heart (Mihalef [0009], where Mihalef teaches that a system and a method are provided for determining a 3D strain of a heart by using a series of 2D-slice images of the heart.
Although these 2D-slice images should show the same 2D area at different times, they could also show a slightly different pose due to movements of the heart or the patient); classify, based on a machine-learned image classification model, the plurality of medical images into multiple groups, wherein the multiple groups include at least a first group comprising one or more short-axis images of the heart and a second group comprising one or more long-axis images of the heart (Mihalef [0017], [0053]-[0055], [0072], where Mihalef teaches that the pose estimation module estimates the slicing-pose attributes of a series of 2D-slice images, preferably using cardiac-centric coordinate system values (e.g., origin, angles, height, radius) and categorical tags (e.g., long axis, short axis, 2CH, 3CH, 4CH, apical, medial, basal). However, arbitrary coordinate systems are also possible. As said above, the pose is the position and the orientation of the 2D-slice in the heart. This could, e.g., be realized with a spatial vector for the position and vectors spanning a plane for the orientation. However, the pose could also be provided by using categorical information (e.g., a label: “2 Chamber view”), preferably combined with one or more values related to goodness of fitting the label. The pose could, e.g., be given as foreshortening scores based on some 3D geometric measures, which evaluate for example how “far” the slice is from “the” 2 Chamber view slice. The pose estimation module is especially a regressor for continuous variables (e.g., level of foreshortening) and a classifier for categorical variables (e.g., label).
Various model architectures are possible; however, a convolutional neural network (CNN) is preferred); and process at least one group of medical images from the multiple groups, wherein, during the processing, the at least one processor is configured to: segment, based on a machine-learned heart segmentation model, the heart in one or more medical images into multiple anatomical regions (Mihalef [0017], [0053]-[0055], [0072], where Mihalef teaches that, preferably, the 2D-slice images are segmented to get segmented contours of the heart, e.g., endocardium or epicardium. Additional features are also preferred, e.g., continuous features (e.g., coordinate-specific, like origin, radius, angles) and/or categorical features (e.g., long axis, short axis, 2CH, 3CH, 4CH, apical, medial, basal)); register two or more medical images from the at least one group of medical images, wherein the registration is performed based on a machine-learned image registration model pre-trained to compensate for a motion associated with the two or more medical images during the registration (Mihalef [0017], [0050]-[0053], [0072], [0079], where Mihalef teaches: “Regarding the pose estimation module, it is preferred that cardiac-centric coordinate system values, e.g., origin, angles, height, radius, and categorical tags, e.g., long axis, short axis, 2CH, 3CH, 4CH, apical, medial, basal, are used to train this module, that preferably has the architecture of a convolutional neural network (CNN)”); determine, based on a machine-learned pathology detection model, whether a medical abnormality exists in at least one of the multiple anatomical regions (Mihalef [0017], [0050]-[0053], [0067], [0072], [0079], [0088], claim 1, where Mihalef teaches that the output is preferably a 3D-strain of a 2D cross-section of the heart (e.g., 3D-arrows on a 2D-loop); however, the output could also be a 3D-strain of a 3D-region around a 2D cross-section of the heart or a 3D-strain of a whole heart.
So, the system always provides a 3D-strain, and especially added associated statistical info, e.g., mean and standard deviation, for an area including a 2D cross-section of the heart); and provide an indication of the determination (Mihalef [0024], [0026], [0067], [0107], claim 1, where Mihalef teaches that the output is preferably a 3D-strain of a 2D cross-section of the heart (e.g., 3D-arrows on a 2D-loop); however, the output could also be a 3D-strain of a 3D-region around a 2D cross-section of the heart or a 3D-strain of a whole heart. So, the system always provides a 3D-strain, and especially added associated statistical info, e.g., mean and standard deviation, for an area including a 2D cross-section of the heart).

Although the context clearly shows that Mihalef teaches an apparatus corresponding to the system, method, and computer program, he fails to explicitly use the word “apparatus” in the application. However, Soble, in the same line of endeavor, teaches a system, method, and apparatus (Soble [0085]) “wherein the view of the anatomy that the image illustrates is one selected from the group: a long axis view, a short axis view, an anteroposterior (AP) view, a lateral view, a 2D slice of a 3D image, a 2D projection of a 3D image, a 2D perspective rendering of a 3D image, or some combination thereof” (claim 1). “The acquired images may also be processed by the present invention through an Artificial Intelligence (machine learning or neural network) process so that the elements of the information are identified and tagged, and the resultant categorized results stored in a database such as the one having an exemplary structure shown in FIG. 6B” (Soble [0073]). The image segmentation component may be used to isolate one or more structures that may appear within an image.
Examples of structure isolation information that may be developed through the use of the image segmentation component include the left cardiac border on a chest x-ray, the anterior leaflet of the mitral valve on an echocardiogram, the pituitary gland on an MRI, or a lung mass in a chest CT. For purposes of this application, structure isolation information may be anatomic or functional (such as mitral regurgitation by Doppler), and may be 2-, 3-, or 4-dimensional (Soble [0026]). The invention relates generally to a system and methods for the display of medical information. More specifically, the invention is directed to a system and methods by which information regarding a subject, including that which may be characterized as images or data, may be analyzed and processed—such as according to one or more deconstruction steps appropriate for a clinical ontology chosen by the user in order to, for example, classify, identify, and isolate patterns, sets, structures, features, or attributes within the information—and made accessible, such as through the entry of one or more selections of a topic, heading, and subheading within a medical report template developed according to the same clinical ontology, and, by the entry of such one or more selections, developed into an efficient display (Soble [0002]).

Therefore, taking the teachings of Mihalef and Soble as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to have a system, method, computer programs, and apparatus to perform the invention in order to further extend the application of the invention, since a system, method, and computer programs are already taught.
Mihalef in view of Soble fails to explicitly teach wherein the at least one processor being configured to determine whether the medical abnormality exists in the at least one of the multiple anatomical regions comprises the at least one processor being configured to determine a tissue pattern or tissue parameter associated with the at least one of the multiple anatomical regions, and determine whether the medical abnormality exists in the at least one of the multiple anatomical regions based on the determined tissue pattern or tissue parameter. However, Mihalef teaches that the strain measurement module could be based on a machine learning model or on a conventional algorithm (Mihalef [0025]). Burlina, in the same line of endeavor, teaches methods and systems to segment volumetric (3-dimensional or 3D) ultrasound (US) images or image data, such as to distinguish between tissue (e.g., tissue walls or surfaces), and blood and/or other anatomical features. Segmentation, as disclosed herein, may be implemented as a partially or fully automated or machine-implemented process (Burlina [0111], [0127]-[0130], [0158], [0162]-[0163]).
Therefore, taking the teachings of Mihalef, Soble, and Burlina as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use machine learning to segment the image of the heart based on the tissue patterns or parameters, since each region of the heart is characterized by a different type of tissue, tissue patterns, or tissue parameters, in order to increase the accuracy of the segmentation by properly distinguishing each area.

Regarding claims 2 and 13: Mihalef in view of Soble and in view of Burlina teaches wherein the plurality of medical images of the heart includes at least one of a magnetic resonance (MR) image of the heart or a tissue characterization map of the heart (Mihalef [0003], [0040]; Soble [0006], where the combination teaches: “In one embodiment, 3D+time image sequences based on images of a real heart, especially ultrasound data or magnetic resonance imaging data, are used and provided for the method. For each received 3D+time image sequence, a corresponding 3D deformation field is generated by tracking the embodiment of the heart in the images and a corresponding 3D cardiac chamber mask as well as a cardiac chamber mesh is derived”).
Regarding claims 3 and 14: Mihalef in view of Soble teaches wherein the at least one processor being configured to classify the plurality of medical images into the multiple groups comprises the at least one processor being configured to detect, based on the machine-learned image classification model, one or more anatomical landmarks associated with a short axis of the heart in a subset of the plurality of medical images, and classify the subset of medical images as belonging to the first group (Mihalef [0017], [0053]; Soble [0024], where Mihalef and Soble teach that the pose estimation module estimates the slicing-pose attributes of a series of 2D-slice images, preferably using cardiac-centric coordinate system values (e.g., origin, angles, height, radius) and categorical tags (e.g., long axis, short axis, 2CH, 3CH, 4CH, apical, medial, basal)… The pose estimation module is especially a regressor for continuous variables (e.g., level of foreshortening) and a classifier for categorical variables (e.g., label)).

Regarding claim 4: Mihalef in view of Soble teaches wherein the one or more anatomical landmarks include at least one of a mitral annulus or an apical tip (Mihalef [0052], [0140], where Mihalef teaches: “For instance, considering the left ventricle (LV), three orthogonal axes of the reference system can be defined as the long axis of the left ventricle (line connecting the LV apex to the barycenter of the mitral annulus), the axis connecting the barycenter of the mitral annulus with the barycenter of the tricuspid valve, and a third axis orthogonal to the previous two”).
Regarding claims 5 and 15: Mihalef in view of Soble teaches wherein the one or more anatomical regions include a left ventricle of the heart and a right ventricle of the heart (Mihalef [0052], [0140]; Soble [0026], [0076], where the cited combination teaches, for instance, considering the left ventricle (LV) and right ventricle (RV), that three orthogonal axes of the reference system can be defined as the long axis of the left ventricle (line connecting the LV apex to the barycenter of the mitral annulus), the axis connecting the barycenter of the mitral annulus with the barycenter of the tricuspid valve, and a third axis orthogonal to the previous two).

Regarding claims 6 and 16: Mihalef in view of Soble teaches wherein the one or more anatomical regions include multiple myocardial segments comprising one or more basal segments, one or more mid-cavity segments, and one or more apical segments (Mihalef [0017], [0053]; Soble [0026], claim 8, where the combination teaches: “For instance, a cutting plane orthogonal to the chamber long axis represents a short axis view; medial/apical/basal regions can be defined based on the point coordinates along the long axis, and correspondingly the cutting plane can be defined as a medial/apical/basal short axis view”).
Regarding claims 7 and 17: Mihalef in view of Soble teaches wherein the machine-learned heart segmentation model is trained to detect, in the at least one group of medical images, one or more anatomical landmarks that indicate where a left ventricle of the heart intersects with a right ventricle of the heart, and segment the heart into the multiple myocardial segments based on the one or more anatomical landmarks (Mihalef [0052], [0140]; Soble [0026], [0076]).

Regarding claims 9 and 19: Mihalef in view of Soble and in view of Burlina teaches wherein the machine-learned pathology detection model is further trained to segment an area of the heart that is associated with the determined tissue pattern or tissue parameter from the at least one of the multiple anatomical regions (Mihalef [0017], [0053]; Burlina [0111], [0127]-[0130], [0158], [0162]-[0163]).

Regarding claim 11: Mihalef in view of Soble and in view of Burlina teaches wherein the indication comprises a segmentation of the medical abnormality or a report of the medical abnormality (Mihalef [0067], [0079], [0088], claim 1, where the combination teaches that the output is preferably a 3D-strain of a 2D cross-section of the heart (e.g., 3D-arrows on a 2D-loop); however, the output could also be a 3D-strain of a 3D-region around a 2D cross-section of the heart or a 3D-strain of a whole heart. So, the system always provides a 3D-strain, and especially added associated statistical info, e.g., mean and standard deviation, for an area including a 2D cross-section of the heart).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEDNEL CADEAU, whose telephone number is (571) 270-7843. The examiner can normally be reached Mon-Fri 9:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chieh Fan, can be reached at 571-272-3042. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WEDNEL CADEAU/
Primary Examiner, Art Unit 2632
August 27, 2025
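For orientation, the claim-1 pipeline that the rejection maps onto Mihalef, Soble, and Burlina (classify images into short-/long-axis groups, register with motion compensation, segment into anatomical regions, detect abnormality from a tissue pattern or parameter, and output an indication) can be sketched as follows. This is an illustrative stub only: the function names and toy model behaviors are hypothetical and do not come from the application or the cited references.

```python
# Hypothetical sketch of the claim-1 processing flow; the four callables stand in
# for the machine-learned models recited in the claim.
def analyze_cardiac_images(images, classify, register, segment, detect):
    # 1) Classify each image into a short-axis or long-axis group.
    groups = {"short_axis": [], "long_axis": []}
    for img in images:
        groups[classify(img)].append(img)

    findings = []
    for group_name, group in groups.items():
        if len(group) < 2:
            continue  # registration operates on two or more images
        # 2) Register the group, compensating for cardiac/patient motion.
        aligned = register(group)
        for img in aligned:
            # 3) Segment the heart into anatomical regions.
            for region in segment(img):
                # 4) Detect abnormality from a tissue pattern or parameter.
                abnormality = detect(region)
                if abnormality:
                    findings.append((group_name, region, abnormality))
    return findings  # 5) Indication of the determination


# Toy stand-ins for the learned models (illustrative only, not real models).
classify = lambda img: "short_axis" if img["axis"] == "short" else "long_axis"
register = lambda group: group            # identity: no motion in toy data
segment = lambda img: ["LV", "RV"]        # left/right ventricle regions
detect = lambda region: "abnormal tissue pattern" if region == "LV" else None

images = [{"axis": "short"}, {"axis": "short"}, {"axis": "long"}]
print(analyze_cardiac_images(images, classify, register, segment, detect))
```

The dispute in this action centers on steps 2 and 4: whether the cited art teaches a registration model pre-trained for motion compensation and a pathology model keyed to tissue patterns or parameters.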

Prosecution Timeline

Oct 26, 2022
Application Filed
Mar 05, 2025
Non-Final Rejection — §103
Jun 10, 2025
Response Filed
Aug 27, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586241
POSITION DETERMINATION METHOD, DEVICE, AND SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573052
METHOD AND APPARATUS FOR IMAGE SEGMENTATION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573022
ANOMALY DETECTION FOR COMPONENT THROUGH MACHINE-LEARNING BASED IMAGE PROCESSING AND CONSIDERING UPPER AND LOWER BOUND VALUES
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573076
POSITION MEASUREMENT SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567178
THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
91%
With Interview (+19.6%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 532 resolved cases by this examiner. Grant probability derived from career allow rate.
