Prosecution Insights
Last updated: April 19, 2026
Application No. 16/813,540

DIAGNOSTICS USING ONE OR MORE NEURAL NETWORKS

Status: Non-Final OA (§103)
Filed: Mar 09, 2020
Examiner: HAUSMANN, MICHELLE M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 8 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 8-9
Time to Grant: 3y 1m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 76% (658 granted / 863 resolved; +14.2% vs TC avg, above average)
Interview Lift: +21.6% among resolved cases with interview (strong)
Typical Timeline: 3y 1m avg prosecution; 23 currently pending
Career History: 886 total applications across all art units
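The headline figures in this card reduce to simple arithmetic. A minimal sketch, using only the counts shown above and assuming the 98% with-interview figure is the career rate plus the observed lift (the provider's actual model is not shown here):

```python
# Illustrative arithmetic only: reproduces the headline figures on this page
# from the counts shown (658 granted of 863 resolved, +21.6% interview lift).
granted, resolved = 658, 863

# Career allow rate: share of resolved applications that granted.
allow_rate = granted / resolved * 100          # about 76.2%, displayed as 76%

# "With interview" probability, assumed here to be rate plus observed lift.
interview_lift = 21.6
with_interview = allow_rate + interview_lift   # about 97.8%, displayed as 98%

print(round(allow_rate), round(with_interview))
```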

Statute-Specific Performance

§101: 14.6% (-25.4% vs TC avg)
§103: 61.2% (+21.2% vs TC avg)
§102: 5.7% (-34.3% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 863 resolved cases
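The delta figures above appear to relate to the per-statute rates as delta = examiner rate minus Tech Center average, so the implied TC baseline can be recovered from the displayed numbers. A small sketch (illustrative only; assumes the deltas are simple differences):

```python
# Recover the implied Tech Center average from each statute's displayed
# rate and its "vs TC avg" delta: tc_average = rate - delta.
rates  = {"101": 14.6, "103": 61.2, "102": 5.7, "112": 10.1}
deltas = {"101": -25.4, "103": 21.2, "102": -34.3, "112": -29.9}

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute here implies the same 40.0% baseline
```

That every statute implies the same 40.0% baseline suggests the deltas are measured against a single TC-wide estimate rather than per-statute averages.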

Office Action

§103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 6, 2026 has been entered.

Response to Amendment

Claims 1-30 are pending. Claims 1-30 are amended directly or by dependency on an amended claim.

Response to Arguments

Applicant’s arguments, see page 9, filed February 6, 2026, with respect to the 35 USC 112(a) rejections of claims 1-30, along with accompanying amendments received on the same date, have been fully considered and are persuasive. The 35 USC 112(a) rejections of claims 1-30 have been withdrawn. Applicant’s arguments, see page 10, filed February 6, 2026, with respect to the 35 USC 112(b) rejections of claims 1-30, along with accompanying amendments received on the same date, have been fully considered and are persuasive. The 35 USC 112(b) rejections of claims 1-30 have been withdrawn.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5-8, 11-14, 17-20, 23-26, and 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Sorenson et al. (US 20180137244 A1) in view of Chandler et al. (US 20220036050 A1).

Regarding claims 1, 7, 13, 19, and 25, Sorenson et al. disclose one or more processors, comprising processing circuitry to (processors, ASIC, [0097]), a system comprising one or more processors to (processors, ASIC, [0097]), a method comprising (method, [0052], [0246]), machine readable medium having stored thereon a set of instructions (software embodied on a non-transitory computer readable medium, [0246]), and medical image analysis system (artificial intelligence findings system, anatomical images, abstract, embodiments of the invention relate to medical image interpretation, [0002], processors, ASIC, [0097]) to cause one or more first neural networks (multi-sided platform which utilizes machine learning, deep learning and deterministic statistical methods (engines) running on medical image data, [0032], in-image analysis for medical image data such as convolutional neural network based on deep learning framework, [0199]) to: identify one or more objects comprising at least one type of feature depicted in one or more images stored to a shared memory of one or more graphics processing units (GPUs) (The image processing server can receive image data.
The image data can be received by the machine learning module. The image analysis module and the file analysis module can be integrated within the machine learning module or separate from the machine learning module, as shown in FIG. 16. When the image data is received by the machine learning module automatically or at the request of the client, the machine learning module can categorize the image data. The machine learning module can categorize the image data based on in-image analysis and/or metadata (e.g., DICOM headers or tags). The machine learning module can identify any image information from the image data such as the modality, orientation (e.g., axial, coronal, sagittal, off axis, short axis, 3 chamber view, or any combination thereof), anatomies (organs, vessels, bones, or any combination thereof), body section (e.g., head, next, chest, abdomen, pelvis, extremities, or any combination thereof), sorting information (e.g., 2D, 2.5D, 3D, 4D), study/series description, scanning protocol, sequences, options, flow data, or any combination thereof, [0197]); and determine a processing pipeline comprising one or more diagnostic models selected based, at least in part, on the at least one type of feature one or more objects identified by the one or more first neural networks in the one or more images (According to some embodiments, a machine learned workflow system can receive image data. The machine learned workflow system can review the image data and propose one or more clinical protocols or one or more workflows. The proposed clinical protocols or workflows for each image data can be determined based on in-image analysis and/or metadata of the image data. 
The machine learned workflow system can allow the user to replace, remove, or add clinical protocols or workflows, [0053], Ensemble engines may be combined, for example, one which finds the body part, another that segments it, another that labels the anatomy within it, and another that detects signs of leading diseases in that area, and finally another that can match these findings with clinical information resources and recommendations to provide assistance and direction to the physician, [0062], Only certain engines will apply according to what part of the body it is pertaining to, or according to what imaging modality type (the imaging procedure type) that is used. This will help the engine of engines mentioned above make a good choice and learn what works, [0063], A clinical protocol module can contain clinical protocols related to, for example, volumetric, CT Cardiac, CT Chest, CT Body, CT Head and Neck, MR Body, Body fusion, interventional radiology, maxilla facial, EVAR planning, TAVR planning, vessel analysis, Cardiac MR, Lung Segmentation, Liver Segmentation, Autobatch, any clinical protocols related to the medical areas described in this specification, or any combination thereof. Each clinical protocol can have one or more workflows (not shown). A workflow arranges activities into a process flow according to the order of performing each activity. Each of the activities in the workflow has a clear definition of its functions, the resource required in performing the activity, and the inputs received and outputs generated by the activity. Each activity in a workflow is referred to as a workflow stage, or a workflow element. Workflows can require specific image data to complete the workflows. Currently, users must select the specific image data to use in each workflow which is time consuming. Recommending image data for each workflow for the clinical protocol can reduce physician time. 
A workflow can include, but is not limited to, vessel analysis, calcium scoring, Time Dependent Analysis, CT/CTA subtraction, lobular decomposition, segmentation analysis and tracking, time volume analysis, flythrough, volumetric histogram, fusion CT/MR/PET/SPECT, multi-phase MR, parametric mapping, spherefinder, multi-kv, flow dynamic-MR, autobatch, ejection fraction, centerline extraction, straightened view, diameter and length measurements, CPR and axial renderings, V-Track mode for automated thin-slab MIP, measurement calculations, flow, perfusion, stress, rest, DE, and T1 mapping, any other workflow related to the medical areas described in this specification, any other workflows related to the clinical protocols described in this specification, or any combination thereof, [0182], The machine learning module can identify any image information from the image data such as the modality, orientation (e.g., axial, coronal, sagittal, off axis, short axis, 3 chamber view, or any combination thereof), anatomies (organs, vessels, bones, or any combination thereof), body section (e.g., head, next, chest, abdomen, pelvis, extremities, or any combination thereof), sorting information (e.g., 2D, 2.5D, 3D, 4D), study/series description, scanning protocol, sequences, options, flow data, or any combination thereof. Based on the image information from the image data, the machine learning module can recommend image data (i.e., series) for specific workflows. 
Based on the image information from the image data, the machine learning module can recommend workflows for specific image data, [0197]) [workflow interpreted as the processing pipeline] [diagnostic models interpreted as vessel analysis, calcium scoring, Time Dependent Analysis, CT/CTA subtraction, lobular decomposition, segmentation analysis and tracking, time volume analysis, flythrough, volumetric histogram, fusion CT/MR/PET/SPECT, multi-phase MR, parametric mapping, spherefinder, multi-kv, flow dynamic-MR, autobatch, ejection fraction, centerline extraction, straightened view, diameter and length measurements, CPR and axial renderings, V-Track mode for automated thin-slab MIP, measurement calculations, flow, perfusion, stress, rest, DE, and T1 mapping]. While Sorenson et al. disclose a GPU ([0097]), Sorenson et al. do not make explicit a shared memory of one or more graphics processing units (GPUs). Chandler et al. teach cause one or more first neural networks to: identify one or more objects comprising at least one type of feature depicted in one or more images stored to a shared memory of one or more graphics processing units (GPUs) (performing the gesture recognition operation comprises using a first processor of the one or more multi-threaded processors that implements a first three-dimensional convolutional neural network (3D CNN) to perform an optical flow operation on the information representative of the one or more areas of interest that is accessed from the shared memory, wherein the optical flow operation is enabled to recognize a motion associated with the gesture, [0006], analysis of medical images (e.g., MRI, X-ray, CT scan, video content, etc.), [0110], The system determines the value of the Texture ID attribute based on where the captured image data is stored (as a texture) in the shared GPU memory 3304, [0278] After Thread A 3731 finishes generating the image frame, Thread B 3732 and Thread C 3733 continue to process the frame using artificial 
intelligence techniques, [0295]). Sorenson et al. and Chandler et al. are in the same art of medical image analysis (Sorenson et al., [0002]; Chandler et al., [0110]). The combination of Chandler et al. with Sorenson et al. enables using a shared memory of one or more graphics processing units. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the shared memory of Chandler et al. with the invention of Sorenson et al. as this was known at the time of filing, the combination would have predictable results, and as Chandler et al. indicate “To address such performance penalty associated with the multiple copies, a customized code template can be generated to uniformly define attributes for all image data and allow access to the image data without any copies. For example, as shown in FIG. 33B, a custom template that characterizes data access and/or data formats, such as a custom class derived from OpenCv's cv::Mat class, can be defined to manage all captured image data uniformly. In this example, the custom template includes a Texture ID attribute to store the input as textures on the GPU shared memory 3304. The system determines the value of the Texture ID attribute based on where the captured image data is stored (as a texture) in the shared GPU memory 3304. The GPU can then translate the Texture ID value to an actual address value at which the image data is stored. Therefore, the GPUs can access the image without performing any copies. When UMA is enabled, the CPU can also access the image data via the Texture ID (or other similar indicators), thereby eliminating the need to copy the data back and forth between GPU(s) and the CPU” ([0278]) which will improve the computational efficiency and security of the system of Sorenson et al.

Regarding claims 2, 8, 14, 20, and 26, Sorenson et al. and Chandler et al.
disclose the one or more processors, system, method, machine readable medium having stored thereon a set of instructions, and medical image analysis system of claims 1, 7, 13, 19, and 25. Sorenson et al. and Chandler et al. further indicate causing the selected one or more diagnostic models to perform one or more diagnostic processes on the one or more images stored to the shared memory of the one or more GPUs (Sorenson et al., such components can be implemented as software installed and stored in a persistent storage device, which can be loaded and executed in a memory by a processor (not shown) to carry out the processes or operations described throughout this application, [0097], machine learning module will be able to automatically select the relevant data to be processed by other machine learning module or read using a particular clinical protocol, [0183], For example, this allows many imaging studies that include many image acquisitions, many image series, and the use of manual end-user interaction to select and categorize these series prior to evoking image post-processing workflows. An engine of engines driven with the above types of intelligence and awareness about body parts, organs, anatomic segmentation and features of these, can be used to automatically perform this categorization. For example, the end user can refine the categorizations before or after such artificial intelligence work, and then the artificial intelligence engine will learn to perform these tasks better from that input. Eventually, it becomes totally automatic, [0229]; Chandler et al., shared memory GPU, [0278]).

Regarding claims 5, 11, 17, 23, and 29, Sorenson et al. and Chandler et al. disclose the processor, system, method, machine readable medium having stored thereon a set of instructions, and medical image analysis system of claims 2, 8, 14, 20, and 26. Sorenson et al.
further indicate the processing circuitry is further to provide results of the one or more diagnostic processes as the results are received from individual diagnostic processes (One or more results 250 may be generated and stored in persistent storage device 224 as a part of output data 222. In one embodiment, image processing engines 113-115 can be arranged in series, in which an output of a first image processing engine can be utilized as an input of a second image processing engine, as shown in FIG. 4A, [0078], According to another scenario, for example, a PACS server or CT, MRI, ultrasound, X-ray, or other imaging modality or information system can send studies to a first engine of the e-suite. After the first engine processes the studies, the output of findings from the first engine can be sent to a second engine and a third engine. The second engine and the third engine can run in parallel. The output of findings of the second engine and the third engine can be combined. The combined output of the second engine and the third engine can become the output of findings of the e-suite. Alternatively, the process may begin with multiple engines receiving the data for processing and these send their results to one or more other engines as described. The final output can be sent back to the source modality, or a PACS, or the medical data review system to be reviewed by a physician to confirm or deny the findings of the output of the e-suite ensemble, [0083], Similar engines which find similar findings can be run in parallel, in series, or any combination thereof can be different engines to detect the same finding. For example, a first engine, a second engine, and a third engine can be lung nodule detection engines, but they can be from different engine developers or different medical institutions. 
Such a configuration can enable comparing the findings from the three engines from different vendors, providing physicians with immediate access to multiple tools and a quick overview of the findings from each engine, immediately during the diagnostic interpretation process which occurs during medical data review, [0086]).

Regarding claims 6, 12, 18, 24, and 30, Sorenson et al. and Chandler et al. disclose the one or more processors, system, method, machine readable medium having stored thereon a set of instructions, and medical image analysis system of claims 2, 8, 14, 20, and 26. Chandler et al. further indicate the processing circuitry is further to cause data for the one or more images to be stored to a shared memory, and wherein the one or more diagnostic processes are to access the data from the shared memory (performing the gesture recognition operation comprises using a first processor of the one or more multi-threaded processors that implements a first three-dimensional convolutional neural network (3D CNN) to perform an optical flow operation on the information representative of the one or more areas of interest that is accessed from the shared memory, wherein the optical flow operation is enabled to recognize a motion associated with the gesture, [0006], analysis of medical images (e.g., MRI, X-ray, CT scan, video content, etc.), [0110], The system determines the value of the Texture ID attribute based on where the captured image data is stored (as a texture) in the shared GPU memory 3304, [0278], After Thread A 3731 finishes generating the image frame, Thread B 3732 and Thread C 3733 continue to process the frame using artificial intelligence techniques, [0295]).

Claims 3, 4, 9, 10, 15, 16, 21, 22, 27, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Sorenson et al. (US 20180137244 A1) in view of Chandler et al. (US 20220036050 A1) as applied to claims 2, 8, 14, 20, and 26 above, further in view of Kishore et al. (US 20150160974 A1).
Regarding claims 3, 9, 15, 21, and 27, Sorenson et al. and Chandler et al. disclose the one or more processors, system, method, machine readable medium having stored thereon a set of instructions, and medical image analysis system of claims 2, 8, 14, 20, and 26. Sorenson et al. further partly indicate the one or more diagnostic processes are assigned as jobs to one or more processing workflows, wherein the jobs are schedulable to be performed at least partially in sequence or in parallel, and wherein a performance of at least one of the jobs is able to be conditioned on an output of a prior job in a respective processing workflow (The selected image processing engines can be configured to a variety of configurations (e.g., in series, in parallel, or both) to perform a sequence of one or more image processing operations, [0041], The selected image processing engines can be configured to a variety of configurations (e.g., in series, in parallel, or both) to perform a sequence of one or more image processing operations., [0059] multiple image processing engines provided by multiple vendors can be configured in series, in parallel, or a combination of both to perform image processing operations, [0060] Processing engines having compatible inputs and outputs can be executed serially or in parallel (e.g., via multiple threads) to create a more accurate final output.[0075] In one embodiment, image processing engines 113-115 can be arranged in series, in which an output of a first image processing engine can be utilized as an input of a second image processing engine, as shown in FIG. 4A. Alternatively, image processing engines 113-115 can be arranged in parallel to perform the same or different image processing operations concurrently as shown in FIG. 4B [0078]) however another reference is added to make this more explicit. Kishore et al. 
teach one or more diagnostic processes are assigned as jobs to one or more processing workflows (implement an analytics workflow management system that can schedule jobs based on inferred dependencies, [0047], Workflow manager 302 can coordinate scheduling of jobs. For example, workflow manager 302 can receive job definitions via job creation user interface 304 and can determine when to schedule jobs, e.g., by inferring dependencies between jobs from the data tables (or other data objects) identified as being produced and consumed by various jobs. Examples of job scheduling processes that can be implemented in workflow manager 302 are described below, [0049], Once a job or workflow (a set of jobs with dependencies) is defined, workflow manager 302 can proceed to schedule the job, [0106]), wherein the jobs are schedulable to be performed at least partially in sequence or in parallel (a scheduling system can require that jobs A 102 and B 104 be completed before job C 106 begins. Jobs A 102 and B 104 can execute in any order, or concurrently, since neither depends on data generated by the other, [0028]), and wherein a performance of at least one of the jobs is able to be conditioned on an output of a prior job in a respective processing workflow (The system can schedule executions of the source and sink jobs such that the source job completes (or completes generation of the source data object) before the sink job is launched, abstract, [0022], For purposes of correctly executing all of the jobs in FIG. 1, a scheduling system can be used to make sure that jobs are executed in the correct order, which can be any order subject to the condition that a job that produces a data object completes execution (or at least completes production of the data object) before any job that consumes that data object begins execution (or at least begins consumption of the data object). 
Thus, for example, a scheduling system can require that jobs A 102 and B 104 be completed before job C 106 begins. Jobs A 102 and B 104 can execute in any order, or concurrently, since neither depends on data generated by the other. In some systems, e.g., as described below, the scheduling system can identify jobs that are ready to execute and can dispatch the jobs to a computing cluster for execution. Upon completion of a job, the computing cluster can inform the scheduling system, which can then determine which job (or jobs) to dispatch next, [0028], Based on job definitions provide by the analyst(s), workflow manager 302 can schedule jobs for execution by task runner 312. In scheduling jobs, workflow manager 302 can infer dependencies between jobs and a corresponding execution order for dependent jobs based on the analyst's specifications of data objects produced and consumed by various jobs, [0056]). Sorenson et al. and Kishore et al. are in the same art of performing multiple operations (Sorenson et al., [0083], [0086]; Kishore et al., abstract, [0056]). The combination of Kishore et al. with the invention of Sorenson et al. and Chandler et al. enables dependencies to be established so operations can automatically be carried out. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the workflow management described by Kishore et al. with the invention of Sorenson et al. and Chandler et al. as this was known at the time of filing, the combination would have predictable results, and as Kishore et al. indicate “Specifying dependencies on data objects rather than jobs can simplify the analyst's task. 
For example, the analyst defining a job that consumes data from a particular source data object does not need to know which job (or jobs) produce the source data object; the analyst can simply indicate that the source data object needs to be up-to-date before the consumer job is run” ([0007]), which will aid in the effective automation of the pipeline generation described by Sorenson et al. and Chandler et al.

Regarding claims 4, 10, 16, 22, and 28, Sorenson et al. and Chandler et al. disclose the processor, system, method, machine readable medium having stored thereon a set of instructions, and medical image analysis system of claims 3, 9, 15, 21, and 27. Sorenson et al. further indicate the one or more circuits and processors are further to generate at least one additional diagnostic process based, at least in part, upon a result of at least one of the generated diagnostic processes (In one embodiment, Ensemble engines may be combined, for example, one which finds the body part, another that segments it, another that labels the anatomy within it, and another that detects signs of leading diseases in that area, and finally another that can match these findings with clinical information resources and recommendations to provide assistance and direction to the physician, [0062], After the first engine processes the studies, the output of findings from the first engine can be sent to a second engine and a third engine. The second engine and the third engine can run in parallel. The output of findings of the second engine and the third engine can be combined. The combined output of the second engine and the third engine can become the output of findings of the e-suite.
Alternatively, the process may begin with multiple engines receiving the data for processing and these send their results to one or more other engines as described, [0083], For example, image identification engine 273, based on the new image data 272 and based on the identified for study images of patient anatomical structures, anatomical anomalies and anatomical features within the new image data, calls additional finding engines 275 to produce findings. Additional findings engines 275 are selected to be called by image identification engine 273 based on the identified for study images of patient organs, body parts, and features within the new image data 272 and based upon the expertise of each of the additional findings engine 275. This allows an engine or engine of engines to process one or more medical imaging studies of a patient to determine the organs, body parts or even features found in the body parts or organs and even classifiers of these features, and uses this information to select pertinent other engines that can be run on all of these, and combinations of the images to provide precision-application of engines to image data, [0227]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M ENTEZARI HAUSMANN, whose telephone number is (571) 270-5084. The examiner can normally be reached 10-7 M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent M Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE M ENTEZARI HAUSMANN/
Primary Examiner, Art Unit 2671
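The dependency-inferred scheduling the examiner quotes from Kishore et al. (jobs ordered by the data objects they produce and consume, with a sink job launched only after its source jobs complete) can be sketched as a topological sort. This is a toy illustration with hypothetical job names, not Kishore's implementation:

```python
# Sketch of dependency-inferred job scheduling: each job declares the data
# objects it consumes and produces; the producer/consumer relations imply
# an execution order without jobs naming each other directly.
from graphlib import TopologicalSorter

jobs = {
    # job: (consumes, produces) -- hypothetical names for illustration
    "A": ((), ("images",)),
    "B": ((), ("metadata",)),
    "C": (("images", "metadata"), ("findings",)),
}

# Map each data object to the job that produces it.
producers = {obj: job for job, (_, outs) in jobs.items() for obj in outs}

# A job depends on the producer of every object it consumes.
deps = {job: {producers[obj] for obj in ins} for job, (ins, _) in jobs.items()}

order = list(TopologicalSorter(deps).static_order())
print(order)  # jobs A and B in either order (or concurrently), then C
```

This mirrors the quoted example: jobs A and B have no mutual dependency and may run in any order or in parallel, while job C is dispatched only after both of its source data objects exist.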

Prosecution Timeline

Mar 09, 2020
Application Filed
Mar 02, 2022
Non-Final Rejection — §103
Sep 06, 2022
Response Filed
Sep 13, 2022
Final Rejection — §103
Mar 20, 2023
Request for Continued Examination
Mar 22, 2023
Response after Non-Final Action
Apr 06, 2023
Non-Final Rejection — §103
Jun 20, 2023
Interview Requested
Jun 29, 2023
Examiner Interview Summary
Jun 29, 2023
Applicant Interview (Telephonic)
Oct 11, 2023
Notice of Allowance
Jan 10, 2024
Request for Continued Examination
Jan 17, 2024
Response after Non-Final Action
Feb 05, 2024
Non-Final Rejection — §103
Apr 22, 2024
Interview Requested
May 02, 2024
Examiner Interview Summary
May 02, 2024
Applicant Interview (Telephonic)
Aug 12, 2024
Response Filed
Sep 05, 2024
Final Rejection — §103
Sep 30, 2024
Interview Requested
Mar 06, 2025
Request for Continued Examination
Mar 12, 2025
Response after Non-Final Action
Mar 18, 2025
Non-Final Rejection — §103
May 09, 2025
Interview Requested
May 16, 2025
Examiner Interview Summary
May 16, 2025
Applicant Interview (Telephonic)
Jul 10, 2025
Response Filed
Oct 03, 2025
Final Rejection — §103
Dec 21, 2025
Interview Requested
Dec 29, 2025
Applicant Interview (Telephonic)
Dec 29, 2025
Examiner Interview Summary
Feb 06, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602775
INTERPOLATION OF MEDICAL IMAGES
2y 5m to grant Granted Apr 14, 2026
Patent 12602793
Systems and Methods for Predicting Object Location Within Images and for Analyzing the Images in the Predicted Location for Object Tracking
2y 5m to grant Granted Apr 14, 2026
Patent 12602949
SYSTEM AND METHOD FOR DETECTING HUMAN PRESENCE BASED ON DEPTH SENSING AND INERTIAL MEASUREMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12597261
OBJECT MOVEMENT BEHAVIOR LEARNING
2y 5m to grant Granted Apr 07, 2026
Patent 12597244
METHOD AND DEVICE FOR IMPROVING OBJECT RECOGNITION RATE OF SELF-DRIVING CAR
2y 5m to grant Granted Apr 07, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 8-9
Grant Probability: 76%
With Interview: 98% (+21.6%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 863 resolved cases by this examiner. Grant probability derived from career allow rate.
