Prosecution Insights
Last updated: April 19, 2026
Application No. 18/125,371

METHOD OF PREDICTING A POSITION OF AN OBJECT AT A FUTURE TIME POINT FOR A VEHICLE

Final Rejection — §103, §112
Filed
Mar 23, 2023
Examiner
CROCKETT, JOSHUA BRIGHAM
Art Unit
2661
Tech Center
2600 — Communications
Assignee
Ewha University - Industry Collaboration Foundation
OA Round
2 (Final)
72%
Grant Probability (Favorable)
3-4
OA Rounds
3y 0m
To Grant
99%
With Interview

Examiner Intelligence

Grants 72% — above average
72%
Career Allow Rate
13 granted / 18 resolved
+10.2% vs TC avg
Strong +28% interview lift
+27.5%
Interview Lift
resolved cases with interview
Typical timeline
3y 0m
Avg Prosecution
26 currently pending
Career history
44
Total Applications
across all art units

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 18 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 18/125,371 (the instant application), filed on 03/23/2023.

Response to Arguments

Claims 1, 3, 11, and 12 have been amended. Claim 7 has been canceled. Claims 1-6 and 8-12 are pending in this action.

Applicant's arguments, see pg. 8-9 section II, filed 12 November 2025, with respect to the objections to the drawings have been fully considered and are persuasive. The objections to the drawings have been withdrawn.

Applicant's arguments, see pg. 9 section III, filed 12 November 2025, with respect to the objection to the specification have been fully considered and are persuasive. The objection to the specification has been withdrawn.

Applicant's arguments, see pg. 9-10, filed 12 November 2025, with respect to the rejections of claims 11-12 under 35 U.S.C. 112(b) have been fully considered and are partially persuasive. The claims were amended to correct the issue relating to the "plurality of hypotheses"; therefore, the rejection of claim 11 under 35 U.S.C. 112(b) has been withdrawn. Regarding the second rejection of claim 12 under 35 U.S.C. 112(b), the applicant amended the claim to remove the words "of the". However, the claim remains unclear because it is unclear whether "one or more video images extracted at the current time point t" are a new item or the same item as the "a video image extracted at a current time point t" of claim 10. Therefore, the rejection of claim 12 under 35 U.S.C. 112(b) is maintained.

Applicant's arguments, see pg. 10-13, filed 12 November 2025, with respect to the rejections of claims 1 and 8-12 under 35 U.S.C. 102 and claims 2-7 under 35 U.S.C. 103 have been fully considered and are persuasive. Specifically, the applicant argues that Scheel et al. ("Tracking Multiple Vehicles Using a Variational Radar Model"; the full reference is included on the PTO-892 included with the action filed on 12 August 2025; hereafter, Scheel) does not disclose using an output value for a prediction position of an object at a future time point of a LiDAR model based on the LiDAR information. The examiner agrees. Therefore, the rejection has been withdrawn. The examiner notes that, compared to the original claim 7, the amended claim 1 is narrower due to the inclusion of the wording "for a prediction position of the object at the future time point", and accordingly the scope of the claim has changed from the original claim 7.

However, upon further consideration, a new ground of rejection is made in view of McGill et al. (US 20200089246 A1; hereafter, McGill). McGill discloses: wherein generating the mixture model includes generating the mixture model by mixing the plurality of hypotheses for a prediction position of the object at the future time point ([0048] and Fig. 5, the plural additional expert predictors 540 are understood as a plurality of hypotheses for a prediction of the object at the future time point because they predict vehicle trajectories) and an output value for a prediction position of the object at the future time point ([0048] and Fig. 5, the variational trajectory predictor output, i.e. LiDAR based hypotheses, is merged with the plurality of hypotheses in the mixture predictor 570. [0065] and Fig. 11, the variational trajectory predictor outputs are GMM parameters, which are understood as [0058] trajectory predictions, i.e. future positions) of a LiDAR model based on LiDAR information ([0064] and Fig. 11, the output of the variational trajectory predictor is based on LiDAR model 1140, which is based on LiDAR data 1110). Therefore, the rejections in view of the prior art are maintained. The full rejection, including motivations to combine, is included in the section "Claim Rejections - 35 USC § 103" below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-6 and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 4, it is unclear whether "a LiDAR model based on the LiDAR information" of claim 4 is another LiDAR model or the same LiDAR model of claim 1. If it is a different model, the examiner recommends amending the claim to indicate it as a separate model with wording such as "another" or "second" when describing the model. If it is the same model of claim 1, the examiner recommends amending the claim to read "the LiDAR model based on the LiDAR information". For the purpose of examination, the examiner interprets it broadly as either the same model or another model. Claims 5-6 are dependent on claim 4 and are rejected for not correcting the ambiguity of claim 4.

Regarding claim 12, it is unclear whether "one or more video images extracted at the current time point t" are a new item or the same item as the "a video image extracted at a current time point t" of claim 10. The confusion arises because the claims have very similar wording, "video image extracted at a current time point t", yet claim 12 claims "one or more" video images while claim 10 claims only one. If the applicant intends for these to be separate items, the examiner recommends amending the language "video image extracted at a current time point t" in either claim 10 or claim 12 in order to clearly differentiate the items. If the applicant intends for these to be the same item, the examiner recommends that the applicant amend claim 12, line 3, to read "are derived from the video image extracted at the current time point t". For the purpose of examination, the examiner is applying the latter interpretation.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over Makansi et al. (US 20230154198 A1; hereafter, Makansi) in view of McGill et al. (US 20200089246 A1; hereafter, McGill).

Regarding claim 1, Makansi discloses: A method of predicting a position of an object at a future time point in a vehicle ([0012] a method for vehicle centered future prediction in a driving environment), the method comprising: extracting, by a processor ([0012] and claim 1, the method is "computer-implemented", which a person of ordinary skill in the art would understand as performing the method by a processor), a video image ([0041] inputs include a collection of past images, which is understood as a video image) acquired through a camera of the vehicle ([0012] the driver assistance system is equipped with a camera, which is understood as a camera of the vehicle); extracting, by the processor, the video image as a semantic segmentation image ([0041] inputs include a collection of past semantic segmentation images, which are understood as a video. [0012] the semantic segmentation is of the observed image, i.e. of the video image); extracting, by the processor, a mask image ([0041] input includes a mask image) imaging an attribute ([0041] a mask image is commonly understood in the art to indicate segmentation of object(s) in an image. Segmenting around an object is inherently understood to be performed based on attributes of the object such as color, edges, shape, or other attributes. For evidence of this, refer to the website cited in [0041] of Makansi's disclosure <http://viso.ai/deep-learning/mask-r-cnn/>) and position information ([0041] a mask image is commonly understood in the art to indicate segmentation of object(s) in an image. Segmenting around an object in an image inherently indicates the position of the object in the image by pixel positions. For evidence of this, refer to the website cited in [0041] of Makansi's disclosure <http://viso.ai/deep-learning/mask-r-cnn/>) of an object present in the video image ([0041] "masks of the object of interest" is understood as of an object present in the video image); mixing, by the processor, the video image, the semantic segmentation image, the mask image ([0041] the video image, the semantic segmentation image, and the mask image are input into the future localization network (FLN), which is understood as mixing), and ego-motion information of the vehicle ([0041] the ego-motion is input into the FLN with the above images and is understood as mixing); predicting, by the processor, a position distribution of the object ([0041] the FLN outputs bounding boxes which are understood as indicating the position distribution of the object) for deriving a plurality of hypotheses ([0041] the FLN determines hypotheses) for a prediction position of the object at the future time point ([0041] the hypotheses are "for localization of a given object of interest at time t+Δt", which is understood as predicting the position, by localization, at a future time point, t+Δt); performing, by the processor, a fitting ([0041] fitting is performed) using learned data ([0041] fitting is performed by a fitting network, which is understood as using learned data given that the broadest reasonable interpretation of learned data includes machine learning and the broadest reasonable interpretation of a fitting network includes machine learning) with respect to the plurality of hypotheses derived by predicting the position distribution of the object ([0041] "the RTN output in the form of the RM (i.e. bounding boxes hypotheses at time t+Δt)", the position hypotheses are input for the generation of the mixture model, which is understood as fitting with respect to the hypotheses); and generating, by the processor, a mixture model ([0041] a mixture model, specifically a Gaussian mixture distribution, is determined. See also [0059]).

Makansi does not disclose expressly that the mixture model is generated by mixing the plurality of hypotheses for a prediction position of the object at the future time point and an output value for a prediction position of the object at the future time point of a LiDAR model based on LiDAR information.

McGill discloses: wherein generating the mixture model includes generating the mixture model by mixing the plurality of hypotheses for a prediction position of the object at the future time point ([0048] and Fig. 5, the plural additional expert predictors 540 are understood as a plurality of hypotheses for a prediction of the object at the future time point because they predict vehicle trajectories) and an output value for a prediction position of the object at the future time point ([0048] and Fig. 5, the variational trajectory predictor output, i.e. LiDAR based hypotheses, is merged with the plurality of hypotheses in the mixture predictor 570. [0065] and Fig. 11, the variational trajectory predictor outputs are GMM parameters, which are understood as [0058] trajectory predictions, i.e. future positions. The mixed output of [0048] and Fig. 5 is understood as a mixture model because the term "model" is given the broad interpretation of a representation of an occurrence in real space, such as a trajectory prediction of something happening outside of a vehicle) of a LiDAR model based on LiDAR information ([0064] and Fig. 11, the output of the variational trajectory predictor is based on LiDAR model 1140, which is based on LiDAR data 1110. Note that the term "LiDAR model" is being granted a broad interpretation which includes LiDAR modeling an object, such as a LiDAR image of an object, and/or a machine learning model which utilizes LiDAR data in some way during its processing).

Makansi and McGill are combinable because they are from the same field of endeavor of predicting future positions of objects around a vehicle (Makansi, [0001] and [0003]; McGill, [0027]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the mixture model of McGill with the invention of Makansi. The motivation for doing so would have been that "mixture predictor 570 selects as the most likely predicted vehicle trajectory the one having a confidence score 560 indicating the highest level of confidence among the candidate predicted trajectories" (McGill, [0051]). In other words, mixing hypotheses (vehicle trajectories) allows the best or most accurate one to be selected for improved predictions. Therefore, it would have been obvious to combine McGill with Makansi to obtain the invention as specified in claim 1.

Regarding claim 8, Makansi in view of McGill discloses the subject matter of claim 1. Makansi further discloses: wherein the ego-motion information of the vehicle comprises information corresponding to a current time point t and a future time point (t+Δt) ([0041] "ego-motion from time t to time t+Δt").

Regarding claim 9, Makansi in view of McGill discloses the subject matter of claim 1. Makansi further discloses: wherein the video image ([0041] the video image is "past images from time t−δt to time t"), the semantic segmentation image ([0041] the semantic segmentation image is "past semantic maps of dynamic environment from time t−δt to time t"), and the mask image ([0041] the mask image is "past masks of the object of interest from time t−δt to time t") are extracted for a current time point t and a plurality of past time points (see the teaching of each image above showing that each image is collected for a current time point and past time points).

Regarding claim 10, Makansi in view of McGill discloses the subject matter of claim 1. Makansi further discloses: prior to extracting the video image acquired through the camera of the vehicle (prior to the steps of stage (c) in [0041], see the mapping of claim 1 above, the method performs stage (a) in [0037]), predicting a position of the object ([0037] static segmentation is performed, which is understood as predicting a position of the object), wherein predicting the position of the object includes deriving a plurality of hypotheses ([0037] the segmentation, predicting the position, is done by generating multiple bounding box hypotheses, which are understood as a plurality of hypotheses) from a video image extracted at a current time point t ([0037] this step is performed for a current time t. [0012] "observing at a given time step (t) . . . an image from the driving environment; obtaining a semantic map of static elements in the observed image;" the image at time t is then segmented during the above position prediction step) acquired through the camera of the vehicle ([0012] "observing at a given time step (t) through an egocentric vision of the camera," the image is obtained through the camera of the vehicle).

Regarding claim 11, Makansi in view of McGill discloses the subject matter of claim 10.
Makansi further discloses: wherein mixing the video image, the semantic segmentation image, the mask image, and the ego-motion information of the vehicle further comprises mixing the plurality of hypotheses of the predicting the position of the object ([0041] the reachability transfer network (RTN) output, which includes bounding box hypotheses [0039], is input into the FLN with the video image, the segmentation image, the mask image, and the ego-motion information, which is understood as mixing).

Regarding claim 12, Makansi in view of McGill discloses the subject matter of claim 10. Makansi further discloses: wherein predicting the plurality of hypotheses of the predicting the position of the object are derived from one or more video images extracted at the current time point t acquired through the camera of the vehicle ([0037] and Fig. 1, the RPN outputs include hypotheses. The hypotheses are from the current time point t. [0039] the RPN outputs are input into the RTN in the form of bounding box hypotheses. Therefore, the hypotheses are derived from videos at the current time point), the semantic segmentation image extracted at the current time point t ([0039] the RTN inputs include the semantic segmentation at time t), and the ego-motion information of the vehicle ([0039] the RTN inputs include the ego-motion information of the vehicle).

Claims 2-5 are rejected under 35 U.S.C. 103 as being unpatentable over Makansi et al. (US 20230154198 A1; hereafter, Makansi) in view of McGill et al. (US 20200089246 A1; hereafter, McGill) in further view of Smolyanskiy et al. (US 20210150230 A1; hereafter, Smolyanskiy).

Regarding claim 2, Makansi in view of McGill discloses the subject matter of claim 1. Makansi in view of McGill does not disclose expressly that the video information is a wide view image obtained by stitching two or more video images together. Smolyanskiy discloses: wherein the video image information comprises a wide view image ([0118] a composite RGB image is understood as a wide view image) obtained by extracting and stitching two or more video image information acquired through the camera of the vehicle ([0118] the composite RGB image is formed by stitching multiple images together). Smolyanskiy is combinable with Makansi in view of McGill because they are from the same field of endeavor of detecting objects for an autonomous machine (Smolyanskiy, [0006]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the wide view image of Smolyanskiy with the invention of Makansi in view of McGill. The motivation for doing so would have been that cameras having different fields of view may have their images merged to provide more information for the system (Smolyanskiy, [0143] such as front view cameras to help identify forward facing paths and obstacles, and [0146] side view cameras to help provide an occupancy grid and collision warnings). Therefore, it would have been obvious to combine Smolyanskiy with Makansi in view of McGill to obtain the invention as specified in claim 2.

Regarding claim 3, Makansi in view of McGill in further view of Smolyanskiy discloses the subject matter of claim 2. Makansi in view of McGill does not disclose expressly that the wide view image is an RGB image and that the method includes predicting routes using a multi view of RGB image and LiDAR information in an egocentric view. Smolyanskiy discloses: the wide view image is an RGB two-dimensional (2D) image ([0118] a composite RGB image is understood as a wide view image and is understood as 2-D unless expressly stated otherwise), and the method includes predicting routes ([0121] objects may be tracked frame to frame over time, which is understood as predicting a route of the object) using a multi-view synthesizing the RGB 2D image and LiDAR information ([0122] the image data and LiDAR data may be linked, which is understood as a multi-view synthesis) based on an egocentric view ([0043] and Fig. 16A, the sensors are sensors of an ego-object, e.g. a car; therefore, the view generated by them is an egocentric view). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the LiDAR information synthesized with RGB information of Smolyanskiy with the invention of Makansi in view of McGill. The motivation for doing so would have been that "In some embodiments, linked annotations or object tracks from different types of sensor data may be used as inputs to object detection and tracking processes to track the movement of annotated objects from frame to frame over time with improved accuracy" (Smolyanskiy, [0122]). Therefore, it would have been obvious to combine Smolyanskiy with Makansi in view of McGill to obtain the invention as specified in claim 3.

Regarding claim 4, Makansi in view of McGill in further view of Smolyanskiy discloses the subject matter of claim 3. Makansi further discloses: wherein the mixture model is generated ([0041] a mixture model, specifically a Gaussian mixture distribution, is determined. See also [0059]) by mixing output values of an RGB 2D model based on the video image, the semantic segmentation image, and the mask image ([0041] the video image (understood as an RGB image), the semantic segmentation image, and the mask image are input into the future localization network (FLN), which is understood as mixing). Makansi does not disclose expressly that LiDAR information is mixed with the other images. McGill discloses: and a LiDAR model based on the LiDAR information ([0064] and Fig. 11, LiDAR model 1140 is based on LiDAR data 1110. Note that the term "LiDAR model" is being granted a broad interpretation which includes LiDAR modeling an object, such as a LiDAR image, and/or a machine learning model which utilizes LiDAR data in some way during its processing). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the LiDAR information of McGill with the invention of Makansi. The motivation for doing so would have been that "mixture predictor 570 selects as the most likely predicted vehicle trajectory the one having a confidence score 560 indicating the highest level of confidence among the candidate predicted trajectories" (McGill, [0051]). In other words, mixing hypotheses (vehicle trajectories) allows the best or most accurate one to be selected for improved predictions. Therefore, it would have been obvious to combine McGill with Makansi to obtain the invention as specified in claim 4.

Regarding claim 5, Makansi in view of McGill in further view of Smolyanskiy discloses the subject matter of claim 4. Makansi further discloses: further comprising generating a Gaussian mixture probability distribution using the mixture model ([0041] the mixture model is a Gaussian mixture distribution).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Makansi et al. (US 20230154198 A1; hereafter, Makansi) in view of McGill et al. (US 20200089246 A1; hereafter, McGill) in further view of Smolyanskiy et al. (US 20210150230 A1; hereafter, Smolyanskiy) and of Wen et al. ("Three-Attention Mechanisms for One-Stage 3-D Object Detection Based on LiDAR and Camera"; the full reference is contained on the PTO-892 submitted with the action filed 12 August 2025; hereafter, Wen).

Regarding claim 6, Makansi in view of McGill in further view of Smolyanskiy discloses the subject matter of claim 5. Makansi in view of McGill in further view of Smolyanskiy does not disclose expressly that predicting the position of an object includes synthesizing an image vector and a LiDAR vector using a deep learning attention mechanism. Wen discloses: wherein predicting the position distribution of the object (pg. 6657 col. 2 para. 3, the system creates region of interest (ROI) proposals, which are understood as object position predictions) includes synthesizing a final vector from the video image (pg. 6659 col. 1 para. 1, the ROI is determined using a vector b which is from Fi, the RGB image) and a final vector from the LiDAR information (pg. 6659 col. 1 para. 1, the ROI is determined using a vector a which is from Fb, the LiDAR information) using a deep learning attention mechanism (pg. 6659 col. 1 para. 2, the vectors are fused using an attention mechanism). Wen is combinable with Makansi in view of McGill in further view of Smolyanskiy because it is from the same field of endeavor of object detection around a vehicle (Wen, pg. 6655 col. 1 para. 1). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the attention mechanism of Wen with the invention of Makansi in view of McGill in further view of Smolyanskiy. The motivation for doing so would have been that "The RA [i.e. attention] mechanism weights the paired BEV [i.e. LiDAR] ROIs and RGB image ROIs first and then fuses them using the addition operation. This gives more weight to important features" (Wen, pg. 6656 col. 1 para. 5). Therefore, it would have been obvious to combine Wen with Makansi in view of McGill in further view of Smolyanskiy to obtain the invention as specified in claim 6.
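
Editor's note, not part of the Office Action: the claim 6 mapping turns on fusing an image-derived ROI feature vector with a LiDAR-derived ROI feature vector through an attention mechanism. Below is a minimal sketch of that general idea, assuming a simple learned-score softmax over the two branches followed by weighted addition; the dimensions, stand-in weights, and variable names are illustrative assumptions and are not taken from Wen, the application, or any other cited reference.

```python
# Illustrative sketch of attention-weighted fusion of paired ROI features from
# an RGB image branch and a LiDAR (BEV) branch. Generic formulation; the
# scoring vectors and shapes are assumptions, not Wen's architecture.
import numpy as np

rng = np.random.default_rng(0)
D = 64                                   # assumed ROI feature dimension

rgb_roi = rng.normal(size=D)             # feature vector from the image branch
bev_roi = rng.normal(size=D)             # feature vector from the LiDAR branch

# Learned scoring vectors (random stand-ins here) give each branch a scalar score.
w_rgb = rng.normal(size=D)
w_bev = rng.normal(size=D)

scores = np.array([w_rgb @ rgb_roi, w_bev @ bev_roi])
weights = np.exp(scores - scores.max())
weights /= weights.sum()                 # softmax over the two branches

# Weight each branch's features, then fuse by addition, giving more weight to
# the branch whose score marks it as more informative for this ROI.
fused = weights[0] * rgb_roi + weights[1] * bev_roi
print(weights, fused.shape)
```
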
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 11810364 B2, RoyChowdhury et al., discloses a system which detects damage in the road from image data and from LiDAR data and then combines the detections.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA B CROCKETT, whose telephone number is (571) 270-7989. The examiner can normally be reached Monday-Thursday, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John M Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSHUA B. CROCKETT/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661
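
Editor's note, not part of the Office Action: the rejections of claims 1, 4, and 5 all revolve around one idea, generating a Gaussian mixture by mixing camera-derived position hypotheses with the output of a LiDAR model. The sketch below illustrates only that general concept; it assumes each hypothesis is a 2D Gaussian over the object's position at the future time point, and none of the structure or numbers are taken from the application, Makansi, or McGill.

```python
# Illustrative sketch only: mix camera-derived position hypotheses with a
# LiDAR-model prediction into a single Gaussian mixture over the object's
# position at a future time point. Not taken from any cited reference.
import numpy as np

def make_hypothesis(mean_xy, var_xy, weight):
    """One 2D Gaussian hypothesis: mean position, diagonal covariance, weight."""
    return {"mean": np.asarray(mean_xy, float),
            "cov": np.diag(var_xy),
            "weight": float(weight)}

def mix_hypotheses(camera_hyps, lidar_hyp):
    """Combine camera hypotheses and one LiDAR-model output; renormalize weights."""
    components = camera_hyps + [lidar_hyp]
    total = sum(c["weight"] for c in components)
    for c in components:
        c["weight"] /= total
    return components

def mixture_pdf(components, xy):
    """Evaluate the Gaussian mixture density at a candidate future position."""
    xy = np.asarray(xy, float)
    density = 0.0
    for c in components:
        diff = xy - c["mean"]
        inv = np.linalg.inv(c["cov"])
        norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(c["cov"])))
        density += c["weight"] * norm * np.exp(-0.5 * diff @ inv @ diff)
    return density

# Two camera-derived hypotheses and one LiDAR-model prediction for time t+dt.
camera = [make_hypothesis([12.0, 3.0], [0.8, 0.5], 0.4),
          make_hypothesis([12.5, 3.4], [1.0, 0.7], 0.3)]
lidar = make_hypothesis([12.2, 3.1], [0.3, 0.3], 0.3)

gmm = mix_hypotheses(camera, lidar)
print(mixture_pdf(gmm, [12.2, 3.1]))
```
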

Prosecution Timeline

Mar 23, 2023
Application Filed
Aug 08, 2025
Non-Final Rejection — §103, §112
Nov 12, 2025
Response Filed
Feb 05, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592060
ARTIFICIAL INTELLIGENCE DEVICE AND 3D AGENCY GENERATING METHOD THEREOF
2y 5m to grant • Granted Mar 31, 2026
Patent 12587704
VIDEO DATA TRANSMISSION AND RECEPTION METHOD USING HIGH-SPEED INTERFACE, AND APPARATUS THEREFOR
2y 5m to grant • Granted Mar 24, 2026
Patent 12567150
EDITING PRESEGMENTED IMAGES AND VOLUMES USING DEEP LEARNING
2y 5m to grant • Granted Mar 03, 2026
Patent 12561839
SYSTEMS AND METHODS FOR CALIBRATING IMAGE SENSORS OF A VEHICLE
2y 5m to grant • Granted Feb 24, 2026
Patent 12529639
METHOD FOR ESTIMATING HYDROCARBON SATURATION OF A ROCK
2y 5m to grant • Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+27.5%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
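
For readers who want to sanity-check the headline numbers, here is a minimal sketch of how they appear to be derived: the grant probability matches the examiner's career allow rate (13 granted of 18 resolved), and the with-interview figure matches the base rate plus the 27.5-point interview lift, capped at 99%. The additive-lift model and the cap are assumptions made for illustration, not a documented methodology.

```python
# Illustrative reconstruction of the dashboard figures (assumptions, not a
# documented methodology).

def career_allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that issued as patents."""
    return granted / resolved

def grant_probability_with_interview(base: float, lift_pts: float, cap: float = 0.99) -> float:
    """Assumed model: interview lift is added as percentage points, then capped."""
    return min(base + lift_pts, cap)

base = career_allow_rate(13, 18)                                 # 0.722 -> shown as 72%
with_interview = grant_probability_with_interview(base, 0.275)   # capped -> shown as 99%

print(f"Career allow rate: {base:.1%}")
print(f"With interview:    {with_interview:.1%}")
```
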
