Prosecution Insights
Last updated: April 19, 2026
Application No. 18/624,874

POWER EFFICIENT OBJECT TRACKING

Final Rejection — §103, §112
Filed
Apr 02, 2024
Examiner
CHIO, TAT CHI
Art Unit
2486
Tech Center
2400 — Computer Networks
Assignee
Apple Inc.
OA Round
2 (Final)
73%
Grant Probability
Favorable
3-4
OA Rounds
3y 2m
To Grant
90%
With Interview

Examiner Intelligence

Grants 73% — above average
73%
Career Allow Rate
610 granted / 836 resolved
+15.0% vs TC avg
Strong +17% interview lift
+16.6%
Interview Lift
resolved cases with interview
Typical timeline
3y 2m
Avg Prosecution
49 currently pending
Career history
885
Total Applications
across all art units

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 836 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 9/26/2025 have been fully considered but they are not persuasive.

Applicant argues that the combination of Lee, Oshima, and Cervelli does not explicitly teach determining, by the system process, a power-efficient accuracy for tracking the type of the object. In response, the examiner respectfully disagrees. Lee teaches detecting an object within the frame. Detecting an object may refer to detecting a threshold condition related to an object in an image frame. The threshold condition may comprise a change in image data characteristics that may increase or decrease a difficulty of recognizing the object at the current settings. [0032]. Detecting the threshold condition may include extracting object information for the object from the first frame of image data. [0033]. Extracting object information may include recognizing the object. [0035]. Extracting object information may include identifying a background region of the frame that excludes the object. Identifying such background regions may permit those regions to be ignored during downstream processing, potentially simplifying downstream processing and/or improving tracking efficiency. In turn, improved processing may conserve power and extend power supply duration. [0036]. At 212, method 200 includes adjusting a setting in response to detecting the threshold condition in one or more frames of image data. [0037]. Adjusting a setting at 212 may include, at 214, adjusting a setting that changes a power consumption of an image acquisition device, in response to detecting the object. [0040]. Settings for collecting image data for selected portions of an image frame may be configured to operate at reduced power levels (e.g. reduced illuminating settings and/or resolution settings) as a default state until an initial presence of the object is detected and/or recognized within a field of view of the image acquisition device. Once the initial presence of the object is detected, the settings may be adjusted so that power levels are increased to thereby obtain better data for object tracking. [0041]. Here, the types of the object are the background and the object identified in the image. A setting is adjusted according to whether it is the background or the identified object.

Applicant argues that the combination of Lee, Oshima, and Cervelli does not explicitly teach receiving, at a system process of a device, a type of an object to be tracked using one or more sensors of the device. In response, the examiner respectfully disagrees. Oshima teaches a tracking target receiving unit that receives a specified tracking target that is tracked by the camera. [0081], [0115], [0120], [0125], and [0137]. As discussed above, Lee teaches identifying an object and background. The types of objects here are considered the background and the identified object. Oshima teaches a tracking target receiving unit that receives a specified tracking target. Thus, it is considered receiving a type, which is the object, not the background, to be tracked.

Applicant argues that the combination of Lee, Oshima, and Cervelli does not explicitly teach providing, by an application at a device to a system process at the device, a type of an object to be tracked using one or more sensors of the device. In response, the examiner respectfully disagrees.
Oshima teaches a tracking target receiving unit that receives a specified tracking target that is tracked by the camera. [0081], [0115], [0120], [0125], and [0137]. As discussed above, Lee teaches identifying an object and background. The types of objects here are considered the background and the identified object. Oshima teaches a tracking target receiving unit that receives a specified tracking target. Thus, it is considered providing the type, which is the object, not the background, to be tracked.

Applicant argues that the combination of Lee, Oshima, and Cervelli does not explicitly teach determining the power-efficient accuracy for tracking the type of object comprises providing the type of the object to a machine learning model at the device, the machine learning model having been trained to output a sensor accuracy responsive to receiving object type information. In response, the examiner respectfully disagrees.

Robaina teaches the object recognitions may be performed using a variety of computer vision techniques. For example, the wearable system can analyze the images acquired by the outward-facing imaging system 464 (shown in FIG. 4) to perform scene reconstruction, event detection, video tracking, object recognition, object pose estimation, learning, indexing, motion estimation, or image restoration, etc. One or more computer vision algorithms may be used to perform these tasks. Non-limiting examples of computer vision algorithms include: Scale-invariant feature transform (SIFT), speeded up robust features (SURF), oriented FAST and rotated BRIEF (ORB), binary robust invariant scalable keypoints (BRISK), fast retina keypoint (FREAK), Viola-Jones algorithm, Eigenfaces approach, Lucas-Kanade algorithm, Horn-Schunk algorithm, Mean-shift algorithm, visual simultaneous location and mapping (vSLAM) techniques, a sequential Bayesian estimator (e.g., Kalman filter, extended Kalman filter, etc.), bundle adjustment, Adaptive thresholding (and other thresholding techniques), Iterative Closest Point (ICP), Semi Global Matching (SGM), Semi Global Block Matching (SGBM), Feature Point Histograms, and various machine learning algorithms. The object recognitions can additionally or alternatively be performed by a variety of machine learning algorithms (such as e.g., support vector machine, k-nearest neighbors algorithm, Naive Bayes, neural network (including convolutional or deep neural networks), or other supervised/unsupervised models, etc.), and so forth. Once trained, the machine learning algorithm can be stored by the wearable device. Some examples of machine learning algorithms can include supervised or non-supervised machine learning algorithms, including regression algorithms (such as, for example, Ordinary Least Squares Regression), instance-based algorithms (such as, for example, Learning Vector Quantization), decision tree algorithms (such as, for example, classification and regression trees), Bayesian algorithms (such as, for example, Naive Bayes), clustering algorithms (such as, for example, k-means clustering), association rule learning algorithms (such as, for example, a-priori algorithms), artificial neural network algorithms (such as, for example, Perceptron), deep learning algorithms (such as, for example, Deep Boltzmann Machine, or deep neural network), dimensionality reduction algorithms (such as, for example, Principal Component Analysis), ensemble algorithms (such as, for example, Stacked Generalization), and/or other machine learning algorithms.
In some embodiments, individual models can be customized for individual data sets. For example, the wearable device can generate or store a base model. The base model may be used as a starting point to generate additional models specific to a data type (e.g., a particular user in the telepresence session), a data set (e.g., a set of additional images obtained of the user in the telepresence session), conditional situations, or other variations. In some embodiments, the wearable device can be configured to utilize a plurality of techniques to generate models for analysis of the aggregated data. Other techniques may include using pre-defined thresholds or data values. One or more object recognizers 708 can also implement various text recognition algorithms to identify and extract the text from the images. Some example text recognition algorithms include: optical character recognition (OCR) algorithms, deep learning algorithms (such as deep neural networks), pattern matching algorithms, algorithms for pre-processing, etc. [0104] – [0106].

The object recognizer(s) 708 can be used to recognize objects in the user's environment. As described with reference to FIG. 7, the object recognizer(s) 708 can apply computer vision algorithms (in addition to or in alternative to machine learning algorithms) to identify medical equipment, documents, faces, etc., in the user's environment. The wearable device can also attach semantic information to the objects. As further described with reference to FIG. 25, the wearable device can use an object recognizer to detect or track a surgical instrument or a medical device in a FOV of the wearable device or the user of the wearable device. Additionally, the wearable device can identify a medical device (e.g., an ultrasound probe) and connect to the device via a wired or a wireless network. For example, the wearable device can scan for messages broadcasted by network-enabled medical devices in its vicinity and wirelessly connect to such devices. The wearable device can receive data from the medical device and present information related to the received data to the wearer of the device (e.g., images from an imaging device, sensor data from a probe (e.g., thermometer), and so forth). In some embodiments, the wearable device may provide a user interface (UI) that permits the wearer (e.g., a surgeon) to access or control a medical device. Additionally or alternatively, the wearable device may include a near field communication (NFC) interface that is configured to communicate over a short range (e.g., about 10 cm) with an NFC enabled medical device to exchange information, identify each other, bootstrap to a wireless connection with higher bandwidth, etc. The NFC interface and the NFC enabled medical device may operate in passive or active modes. [0138].

To further clarify, Robaina teaches one or more object recognizers 708 can crawl through the received data (e.g., the collection of points) and recognize and/or map points, tag images, attach semantic information to objects with the help of a map database 710. [0102]. The wearable system can also supplement recognized objects with semantic information to give life to the objects. For example, if the object recognizer recognizes a set of points to be a door, the system may attach some semantic information (e.g., the door has a hinge and has a 90 degree movement about the hinge).
If the object recognizer recognizes a set of points to be a mirror, the system may attach semantic information that the mirror has a reflective surface that can reflect images of objects in the room. As another example, the object recognizer may recognize a scalpel as belonging to a set of surgical tools for performing a certain type of surgery, for example, by comparing the recognized scalpel with a database of medical instruments used in that type of surgery. The medical instruments database may be stored locally in a data repository 260 in the surgeon's wearable device or in a remote data repository 264 (e.g., in the cloud, such as data store 1238 described with reference to FIG. 12). [0107]. The surgical instrument may be associated with semantic information. For example, the semantic information may include indications that the surgical instrument is part of an instrument set used for amputation. The semantic information can also include the functions of the surgical instrument, such as, e.g., stopping blood from spraying, stitching an open wound, etc. [0139]. The semantic information is considered to be the sensor accuracy because it provides a type of description of the recognized objects.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 21 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The terms “slow-moving,” “fast-moving,” “relatively large,” and “relatively small” in claim 21 are relative terms which render the claim indefinite. The terms “slow-moving,” “fast-moving,” “relatively large,” and “relatively small” are not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-6, 8-13, 15-19, and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2013/0141597 A1) in view of Oshima (US 2017/0328976 A1) and Cervelli et al. (US 2023/0385710 A1).

Consider claim 1, Lee teaches a method, comprising: determining, by the system process, a power-efficient accuracy for tracking the type of the object ([0032] – [0047]); obtaining sensor data from the one or more sensors according to the determined power-efficient accuracy ([0053]). However, Lee does not explicitly teach receiving, at a system process of a device from an application at the device, a type of an object to be tracked using one or more sensors of the device; and providing object tracking information based on the sensor data to the application. Oshima teaches receiving, at a system process of a device from an application at the device, a type of an object to be tracked using one or more sensors of the device ([0081], [0115], [0120], [0125], [0137]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of receiving a type of an object to be tracked because such incorporation would facilitate continuous tracking of a target with high accuracy. [0012] – [0013]. Cervelli teaches providing, by the system process of the device in response to a request including the type of the object, object tracking information based on the sensor data to the application ([0037], [0084], [0109], Fig. 2C). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 2, the combination of Lee, Oshima, and Cervelli teaches modifying the power-efficient accuracy, by the system process, based on power information for the device ([0030] – [0047] of Lee); obtaining different sensor data from the one or more sensors according to the modified power-efficient accuracy ([0053] of Lee); and providing updated object tracking information based on the different sensor data for tracking of the object ([0084], [0109], Fig. 2C of Cervelli). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 3, Cervelli teaches the object tracking information comprises the sensor data ([0084], [0109], Fig. 2C of Cervelli). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 4, Cervelli teaches wherein receiving the type of the object comprises receiving the type of the object in the request from an application at the device ([0037], [0084], [0109], Fig. 2C), processing the sensor data having the determined power-efficient accuracy at the system process to generate the object tracking information ([0084], [0109], Fig. 2C of Cervelli); and providing the object tracking information from the system process to the application ([0084], [0109], Fig. 2C of Cervelli). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 5, Cervelli teaches generating, by the application at the device, virtual content for display at or near the object based on the object tracking information ([0084], [0109], Fig. 2C of Cervelli); and displaying the virtual content generated by the application, at or near the object based on the object tracking information, using a display of the device ([0084], [0109], Fig. 2C of Cervelli). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 6, Cervelli teaches displaying the virtual content using the display comprises displaying the virtual content with a portion of the display that corresponds to a direct view or a pass-through video view of the object ([0084], [0109], Fig. 2C of Cervelli). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 8, the combination of Lee, Oshima, and Cervelli teaches a memory ([0055] – [0060] of Lee); and one or more processors ([0055] – [0060] of Lee) configured to perform the method recited in claim 1 (see rejection of claim 1).

Consider claim 9, claim 9 recites the device implementing the method recited in claim 2. Thus, it is rejected for the same reasons.

Consider claim 10, claim 10 recites the device implementing the method recited in claim 3. Thus, it is rejected for the same reasons.

Consider claim 11, claim 11 recites the device implementing the method recited in claim 4. Thus, it is rejected for the same reasons.

Consider claim 12, claim 12 recites the device implementing the method recited in claim 5. Thus, it is rejected for the same reasons.

Consider claim 13, claim 13 recites the device implementing the method recited in claim 6. Thus, it is rejected for the same reasons.

Consider claim 15, the combination of Lee, Oshima, and Cervelli teaches a method, comprising: providing, by an application at a device to a system process at the device, a type of an object to be tracked using one or more sensors of the device ([0081], [0115], [0120], [0125], [0137] of Oshima); receiving object tracking information from the system process, the object tracking information having been obtained by the system process according to a power-efficient accuracy for tracking the type of the object ([0032] – [0047], [0053] of Lee); generating, by the application, virtual content for display at or near the object based on the object tracking information ([0084], [0109], Fig. 2C of Cervelli); and providing the virtual content from the application to the system process for display by a display of the device ([0084], [0109], Fig. 2C of Cervelli). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of receiving a type of an object to be tracked because such incorporation would facilitate continuous tracking of a target with high accuracy. [0012] – [0013]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 16, the combination of Lee, Oshima, and Cervelli teaches receiving, by the application from the system process, different object tracking information based on the type of the object and power information for the device ([0030] – [0047] and [0053] of Lee); and modifying the virtual content based on the different object tracking information ([0084], [0109], Fig. 2C of Cervelli). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 17, Cervelli teaches the object tracking information comprises sensor data from the one or more sensors of the device ([0084], [0109], Fig. 2C of Cervelli). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 18, Cervelli teaches the object tracking information comprises processed object tracking information generated by the system process based on sensor data from the one or more sensors of the device ([0084], [0109], Fig. 2C of Cervelli). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 19, Cervelli teaches providing the virtual content for display using the display comprises providing the virtual content for display with a portion of the display that corresponds to a direct view or a pass-through video view of the object in a physical environment ([0084], [0109], Fig. 2C of Cervelli). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Consider claim 21, Cervelli teaches the type of the object comprises at least one of: a stationary object, a slow-moving object, a fast-moving object, a relatively large object, a relatively small object, or a portion of another object ([0037], [0084], [0109], Fig. 2C).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing object tracking information to the application because such incorporation would generate a graphical display of the tracked object. [0084].

Claim(s) 7, 14, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2013/0141597 A1) in view of Oshima (US 2017/0328976 A1), Cervelli et al. (US 2023/0385710 A1), and Robaina et al. (US 2023/0075466 A1).

Consider claim 7, the combination of Lee, Oshima, and Cervelli teaches all the limitations in claim 1 and obtaining the sensor data from the one or more sensors according to the determined power-efficient accuracy comprises activating or deactivating one or more of the one or more sensors (image sensor 106 may include one or more video cameras (e.g., one or more RGB, CMYK, gray scale, or IR cameras) for collecting color information from scene 104. [0017] – [0023]). However, the combination does not explicitly teach determining the power-efficient accuracy for tracking the type of the object comprises providing the type of the object to a machine learning model at the device, the machine learning model having been trained to output a sensor accuracy responsive to receiving object type information. Robaina teaches determining the power-efficient accuracy for tracking the type of the object comprises providing the type of the object to a machine learning model at the device, the machine learning model having been trained to output a sensor accuracy responsive to receiving object type information ([0104] – [0106], [0138]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing the type of the object to a machine learning model at the device because such incorporation would help identify medical equipment, documents, faces, etc. in the user's environment. [0138].

Consider claim 14, claim 14 recites the device implementing the method recited in claim 7. Thus, it is rejected for the same reasons.

Consider claim 20, Robaina teaches the power-efficient accuracy has been determined by a machine learning model at the device, the machine learning model having been trained to output a sensor accuracy responsive to receiving object type information ([0104] – [0106], [0138]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of providing the type of the object to a machine learning model at the device because such incorporation would help identify medical equipment, documents, faces, etc. in the user's environment. [0138].

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAT CHI CHIO whose telephone number is (571) 272-9563. The examiner can normally be reached Monday-Thursday, 10am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JAMIE J ATALA, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAT C CHIO/
Primary Examiner, Art Unit 2486

Prosecution Timeline

Apr 02, 2024
Application Filed
Jun 24, 2025
Non-Final Rejection — §103, §112
Sep 26, 2025
Response Filed
Jan 23, 2026
Final Rejection — §103, §112
Mar 24, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587653
Spatial Layer Rate Allocation
2y 5m to grant • Granted Mar 24, 2026
Patent 12549764
THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE
2y 5m to grant • Granted Feb 10, 2026
Patent 12549845
CAMERA SETTING ADJUSTMENT BASED ON EVENT MAPPING
2y 5m to grant • Granted Feb 10, 2026
Patent 12546657
METHODS AND SYSTEMS FOR REMOTE MONITORING OF ELECTRICAL EQUIPMENT
2y 5m to grant • Granted Feb 10, 2026
Patent 12549710
MULTIPLE HYPOTHESIS PREDICTION WITH TEMPLATE MATCHING IN VIDEO CODING
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
90%
With Interview (+16.6%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 836 resolved cases by this examiner. Grant probability derived from career allow rate.
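As a rough consistency check, the sketch below is a minimal Python example of how the headline figures could be derived from the resolved-case counts shown on this page. The function name and the assumption that the "with interview" projection equals the career allow rate plus the reported +16.6% lift are illustrative only, not the tool's documented methodology.

# Illustrative sketch: reproducing the headline figures from the counts above.
# Assumption: "grant probability" is the career allow rate (granted / resolved),
# and the with-interview projection adds the reported interview lift on top.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: granted cases divided by resolved cases."""
    return granted / resolved

career = allow_rate(610, 836)            # ~0.730 -> displayed as 73%
interview_lift = 0.166                   # reported +16.6% lift for interviewed cases
with_interview = career + interview_lift # ~0.896 -> displayed as ~90%

print(f"Career allow rate: {career:.1%}")
print(f"Projected grant probability with interview: {with_interview:.1%}")

Treat this as a sanity check against the displayed values; the underlying model may compute the with-interview figure directly from the interviewed subset rather than additively.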
