Prosecution Insights
Last updated: April 19, 2026
Application No. 17/659,516

DATA ANNOTATION METHOD AND APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM

Status: Final Rejection — §103
Filed: Apr 18, 2022
Examiner: SPRAUL III, VINCENT ANTON
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
OA Round: 4 (Final)
Grant Probability: 59% (Moderate)
Expected OA Rounds: 5-6
Median Time to Grant: 4y 6m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 59% (20 granted / 34 resolved; +3.8% vs TC avg)
Interview Lift: strong, +34.7% higher allowance on resolved cases with an interview than without
Typical Timeline: 4y 6m average prosecution; 30 applications currently pending
Career History: 64 total applications across all art units
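
The headline figures above are simple arithmetic on the career counts. Below is a minimal sketch, assuming (as the dashboard's numbers imply) that the interview lift is an additive percentage-point adjustment to the career allow rate; variable names are illustrative, not from the tool:

# Illustrative only: reproduce the dashboard's headline numbers from the
# raw counts shown above (displayed values are rounded to 59% and 94%).
granted, resolved = 20, 34
career_allow_rate = granted / resolved                # 0.588 -> "59%"
interview_lift = 0.347                                # +34.7 points
with_interview = career_allow_rate + interview_lift   # 0.935 -> "94%"
print(f"baseline {career_allow_rate:.1%}, with interview {with_interview:.1%}")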

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 9.1% (-30.9% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Tech Center averages are estimates; based on career data from 34 resolved cases.
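
Each delta above is measured against an estimated Tech Center average, so the baseline can be recovered by subtraction; a small illustrative check (values copied from the list above). Notably, all four statutes back out to the same 40.0% baseline, which suggests a single Tech Center average estimate is used throughout:

# Recover the implied Tech Center average for each statute by
# subtracting the reported delta from the reported rate.
reported = {"§101": (22.6, -17.4), "§103": (48.4, 8.4),
            "§102": (9.1, -30.9), "§112": (14.4, -25.6)}
for statute, (rate, delta) in reported.items():
    tc_avg = rate - delta  # e.g., §103: 48.4 - 8.4 = 40.0
    print(f"{statute}: examiner {rate}% vs TC avg ~{tc_avg:.1f}%")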

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Regarding the rejection of claims under 35 U.S.C. 103, Applicant's arguments are directed towards amended portions of the claims which have not been previously examined. New grounds of rejection for these claims are provided below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6-7, and 11-12 are rejected under 35 U.S.C. 103 over Brower, US Pre-Grant Publication No. 2021/0027103 (hereafter Brower), in view of Popov et al., US Pre-Grant Publication No. 2021/0156960 (hereafter Popov), and Rusinkiewicz et al., "Efficient Variants of the ICP Algorithm," 2001, doi:10.1109/IM.2001.924423 (hereafter Rusinkiewicz).

Regarding claim 1 and analogous claims 6 and 11:

Brower teaches "A computer-implemented method for data annotation, comprising":

Brower, paragraph 0004: "More specifically, systems and methods [a method] are disclosed that leverage object detections made by machine learning [computer-implemented] models to automatically generate new ground truth data for training or retraining the machine learning model or another machine learning model for accurate detection and identification of objects from a variety of perspectives [for data annotation]."

Brower, paragraph 0019: "FIG. 7 is an example block diagram for an example computing device suitable for implementation of embodiments of the present disclosure."

Brower, paragraph 0082: "Computer-storage media may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by computing device 700. As used herein, computer storage media does not comprise signals per se."

"acquiring a detection model, the detection model being trained by using sensor data manually annotated as startup data":

Brower, paragraph 0072: "The method 600, at block B602, includes training a machine learning model [a detection model] with a first set of ground truth data associated with first image data captured at a first perspective. For example, the machine learning model(s) 104 may be trained based on ground truth data [manually annotated as startup data, interpreted as data that is annotated or labeled outside of, or prior to, the method's training process] 110 associated with first sensor data [sensor data] 102 including image data captured at a first perspective."

"performing offline obstacle detection on to-be-annotated sensor data by using the detection model, the startup data and the to-be-annotated sensor data being a same type of sensor data":

Brower, paragraphs 0073-0074: "The method 600, at block B604, includes after the training, applying, to the machine learning model, second image data captured at a second perspective different from the first perspective [the startup data and the to-be-annotated sensor data being a same type of sensor data]. For example, after training the machine learning model(s) 104 with first sensor data 102, second sensor data 122 may be applied to the machine learning model(s) 104. The second sensor data 104 may be captured at a second perspective different from the first perspective. The method 600, at block B606, includes computing, by the machine learning model and using the second image data, a bounding label for an object in an image from a sequence of images represented by the second image data corresponding to an instance in which the object is detected [performing offline obstacle detection on to-be-annotated sensor data by using the detection model, offline interpreted as not requiring real-time processing]."

"performing offline obstacle tracking and matching according to detection results to obtain obstacle trajectory information; and modifying the detection results according to the obstacle trajectory information, and taking modified detection results as required annotation results":

Brower, paragraph 0075: "The method 600, at block B608, includes determining, using an object tracking algorithm and based on the bounding label, locations of the object in additional images of the sequence of images. For example, object tracking 134 may be used to track locations of the object in some of the undetected object frame(s) 132 based on the second object detections 124 in the detected object frame(s) 130 [performing offline obstacle tracking and matching according to detection results to obtain obstacle trajectory information, and modifying the detection results according to the obstacle trajectory information, interpreted as using the detection model to predict the future location of objects, and including these predicted locations in the results; offline interpreted as not requiring real-time processing]."

"wherein the step of performing obstacle tracking and matching according to detection results to obtain obstacle trajectory information comprises: for the to-be-annotated sensor data, performing a first round of obstacle tracking and matching in chronological order and a second round of obstacle tracking and matching in reverse chronological order according to the detection results, and determining the obstacle trajectory information by combining tracking and matching results of the two rounds":

Brower, paragraph 0041: "In some examples, the object tracking 134 may be performed from a detected object frame 130 (e.g., using the bounding shape of the detected object) to track the object forwards [a first round of obstacle tracking and matching in chronological order] and/or backwards [a second round of obstacle tracking and matching in reverse chronological order], in sequence, to identify a location of the object in one or more of the undetected object frames 132. For example, with respect to FIG. 3A, assume that the image 302 is captured first, then the image 304, then the image 306, in sequence (where each of the images 302, 304, and 306 are represented by the second sensor data 122). In such an example, and because the machine learning model(s) 104 may be trained to detect objects at the first perspective of the first sensor data 102, only the image 306 may be included in the detected object frames 130 and the images 302 and 304 may be included in the undetected object frames 132. As such, the detections (e.g., represented by the bounding shapes 312C and 314C of the vehicle 310C and the pedestrian 320C, respectively) may be used to perform object tracking 134, in reverse order (e.g., from the image 306, to the image 304, and then to the image 302), to generate new object labels 140 through label generation 136 for the vehicle 310 and the pedestrian 320 in each of the images 302 and 304 (e.g., as illustrated in FIG. 3B). The object labels generated based on the predictions of the second object detections 124 by the machine learning model(s) 104 may be carried over to the object detected within the undetected object frame(s) 132 predicted to have included the object."

Brower, paragraph 0042: "Although the above example is described with respect to a reverse order for the object tracking 134 (e.g., tracking the objects from the image 306 backwards toward the image 302), this is not intended to be limiting. In some examples, the machine learning model(s) 104 may have detected the vehicle 310 and/or the pedestrian 320 at an earlier image (e.g., the image 302), at a middle image (e.g., the image 304), and/or another image in the sequence of images, and object tracking 134 may be used to track the vehicle 310 and/or the pedestrian 320 - e.g., in sequential or semi-sequential (e.g., every other image, every third image, etc.) order - in a forward and/or a reverse direction. As such, the images, or sequence of images, may be re-ordered in any way that allows an object tracking algorithm to track an object from an image where the object was detected through other images in the sequence [determining the obstacle trajectory information by combining tracking and matching results of the two rounds]."
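
As an editorial illustration of the claimed two-round structure (not taken from Brower or the application), the following minimal Python sketch runs one greedy IoU-based tracking-and-matching pass in chronological order and one in reverse chronological order over per-frame detection boxes; every name and threshold is hypothetical:

import typing

Box = typing.Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    # Intersection-over-union of two axis-aligned boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0.0 else 0.0

def track_round(frames, iou_thresh=0.3):
    # One tracking-and-matching round over frames in the given order.
    # Each track records its (frame, detection) hits plus an accumulated
    # matching score. Greedy one-pass association (not one-to-one);
    # purely illustrative.
    tracks, prev = [], {}
    for f, dets in enumerate(frames):
        cur = {}
        for d, box in enumerate(dets):
            match, best = None, iou_thresh
            for pd, ti in prev.items():
                s = iou(frames[f - 1][pd], box)
                if s > best:
                    match, best = ti, s
            if match is None:  # no predecessor matched: start a new track
                tracks.append({"hits": [], "score": 0.0})
                match, best = len(tracks) - 1, 0.0
            tracks[match]["hits"].append((f, d))
            tracks[match]["score"] += best
            cur[d] = match
        prev = cur
    return tracks

def two_rounds(frames):
    # First round chronological, second reverse chronological; the
    # reverse round's frame indices are mapped back so both rounds are
    # directly comparable.
    fwd = track_round(frames)
    bwd = track_round(frames[::-1])
    n = len(frames)
    for t in bwd:
        t["hits"] = sorted((n - 1 - f, d) for f, d in t["hits"])
    return fwd, bwd

This mirrors the behavior the Office Action maps onto Brower's paragraphs 0041-0042: an object the detector only finds late in the sequence still acquires labels in earlier frames via the reverse pass.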
(bold only) "wherein the step of determining the obstacle trajectory information by combining tracking and matching results of the two rounds comprises: comparing the tracking and matching result of the first round with the tracking and matching result of the second round, and retaining directly the same parts between the tracking and matching result of the first round and the tracking and matching result of the second round in a retained part; storing the different parts between the tracking and matching result of the first round and the tracking and matching result of the second round, and corresponding tracking matching scores of the different parts obtained during the obstacle tracking and matching in a tracking cache together":

Brower, paragraph 0041: "In some examples, the object tracking 134 may be performed from a detected object frame 130 (e.g., using the bounding shape of the detected object) to track the object forwards [the tracking and matching result of the first round] and/or backwards [the tracking and matching result of the second round], in sequence, to identify a location of the object in one or more of the undetected object frames 132 [and corresponding tracking matching scores of the different parts obtained during the obstacle tracking and matching]. For example, with respect to FIG. 3A, assume that the image 302 is captured first, then the image 304, then the image 306, in sequence (where each of the images 302, 304, and 306 are represented by the second sensor data 122). In such an example, and because the machine learning model(s) 104 may be trained to detect objects at the first perspective of the first sensor data 102, only the image 306 may be included in the detected object frames 130 and the images 302 and 304 may be included in the undetected object frames 132. As such, the detections (e.g., represented by the bounding shapes 312C and 314C of the vehicle 310C and the pedestrian 320C, respectively) may be used to perform object tracking 134, in reverse order (e.g., from the image 306, to the image 304, and then to the image 302), to generate new object labels 140 through label generation 136 for the vehicle 310 and the pedestrian 320 in each of the images 302 and 304 (e.g., as illustrated in FIG. 3B). The object labels generated based on the predictions of the second object detections 124 by the machine learning model(s) 104 may be carried over to the object detected within the undetected object frame(s) 132 predicted to have included the object."

(bold only) "determining the obstacle trajectory information according to the retained part":

Brower, paragraph 0042: "Although the above example is described with respect to a reverse order for the object tracking 134 (e.g., tracking the objects from the image 306 backwards toward the image 302), this is not intended to be limiting. In some examples, the machine learning model(s) 104 may have detected the vehicle 310 and/or the pedestrian 320 at an earlier image (e.g., the image 302), at a middle image (e.g., the image 304), and/or another image in the sequence of images, and object tracking 134 may be used to track the vehicle 310 and/or the pedestrian 320 - e.g., in sequential or semi-sequential (e.g., every other image, every third image, etc.) order - in a forward and/or a reverse direction. As such, the images, or sequence of images, may be re-ordered in any way that allows an object tracking algorithm to track an object from an image where the object was detected through other images in the sequence [determining the obstacle trajectory information]."

Brower does not explicitly teach:

"wherein the sensor data are point cloud data corresponding to a Lidar sensor of an autonomous vehicle, and for any piece of the point cloud data, annotation results manually annotated comprise locations, sizes, orientations and categories of obstacles in the point cloud data"

(bold only) "comparing the tracking and matching result of the first round with the tracking and matching result of the second round, and retaining directly the same parts between the tracking and matching result of the first round and the tracking and matching result of the second round in a retained part"

(bold only) "storing the different parts between the tracking and matching result of the first round and the tracking and matching result of the second round, and corresponding tracking matching scores of the different parts obtained during the obstacle tracking and matching in a tracking cache together, retaining a part with the highest tracking matching score in the retained part and deleting a part conflicting therewith, and repeating the process until the cache is empty"

(bold only) "determining the obstacle trajectory information according to the retained part"

"controlling the autonomous vehicle based on the determined obstacle trajectory information"

Popov teaches "wherein the sensor data are point cloud data corresponding to a Lidar sensor of an autonomous vehicle, and for any piece of the point cloud data, annotation results manually annotated comprise locations, sizes, orientations and categories of obstacles in the point cloud data":

Popov, paragraph 0094: "For example, a scene may be observed with RADAR and LIDAR sensors (e.g., RADAR sensor(s) 1360 and LIDAR sensor(s) 1364 of autonomous vehicle 1300 [a Lidar sensor of an autonomous vehicle] of FIGS. 13A-13D) to collect a frame of RADAR data and LIDAR data for a particular time slice."

Popov, paragraph 0095: "More specifically, a LIDAR point cloud [the sensor data are point cloud data corresponding to a Lidar sensor] may be orthographically projected to form a LIDAR projection image (e.g., an overhead image) corresponding to the RADAR projection image contained in the RADAR data tensor (e.g., having the same dimensionality, perspective, and/or ground sampling distance). The LIDAR projection image may be annotated (e.g., manually [annotation results manually annotated], automatically, etc.) with labels identifying the locations, sizes, orientations, and/or classes of the instances of the relevant objects in the LIDAR projection image [comprise locations, sizes, orientations and categories of obstacles in the point cloud data]."

"controlling the autonomous vehicle based on the determined obstacle trajectory information":

Popov, paragraph 0006: "The detected object instances may be provided to an autonomous machine control stack to enable safe planning and control of an autonomous machine [controlling the autonomous vehicle based on the determined obstacle trajectory information]."

Popov and Brower are both related to the same field of endeavor (machine learning systems for object detection). It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the use of LIDAR point cloud training data of Popov with the object detection teachings of Brower to arrive at the present invention, in order to benefit from LIDAR's greater accuracy versus other sensing techniques, as stated in Popov, paragraphs 0003-0004: "Conventional perception methods rely heavily on the use of cameras or LIDAR sensors to detect obstacles in a scene. [...] Some conventional techniques use RADAR sensors to detect moving, reflective objects. However, many conventional RADAR detection techniques struggle or entirely fail to disambiguate obstacles from background noise in a cluttered environment. Furthermore, while some traditional RADAR detection techniques work well when detecting moving, RADAR-reflective objects, they often struggle or entirely fail to distinguish stationary objects from background noise. Similarly, traditional RADAR detection techniques have a limited accuracy in predicting object classification, dimension, and orientation."

Rusinkiewicz teaches (bold only) "comparing the tracking and matching result of the first round with the tracking and matching result of the second round, and retaining directly the same parts between the tracking and matching result of the first round and the tracking and matching result of the second round in a retained part":

Rusinkiewicz, section 1, paragraph 1: "ICP starts with two meshes and an initial guess for their relative rigid-body transform, and iteratively refines the transform by repeatedly generating pairs of corresponding points on the meshes and minimizing an error metric [comparing the ... result of the first round with the ... result of the second round, and retaining directly the same parts between ... the first round and ... the second round in a retained part]."

(bold only) "storing the different parts between the tracking and matching result of the first round and the tracking and matching result of the second round, and corresponding tracking matching scores of the different parts obtained during the obstacle tracking and matching in a tracking cache together, retaining a part with the highest tracking matching score in the retained part and deleting a part conflicting therewith, and repeating the process until the cache is empty":

Rusinkiewicz, section 1, paragraph 1: "ICP starts with two meshes and an initial guess for their relative rigid-body transform, and iteratively [repeating the process] refines the transform by repeatedly generating pairs of corresponding points on the meshes and minimizing an error metric."

Rusinkiewicz, section 3.4, paragraph 1: "Rejection of pairs that are not consistent with neighboring pairs, assuming surfaces move rigidly [Dorai 98]. This scheme classifies two correspondences (p1, q1) and (p2, q2) as inconsistent iff |Dist(p1, p2) - Dist(q1, q2)| is greater than some threshold. Following [Dorai 98], we use 0.1 * max(Dist(p1, p2), Dist(q1, q2)) as the threshold. The algorithm then rejects those correspondences that are inconsistent with most others [storing the different parts between ... the first round and ... the second round ... in a tracking cache together, retaining a part with the highest tracking matching score in the retained part and deleting a part conflicting therewith, tracking cache interpreted as the collection of points left to be considered for matching or rejection]. Note that the algorithm as originally presented has running time O(n^2) at each iteration of ICP. In order to reduce running time, we have chosen to only compare each correspondence to 10 others, and reject it if it is incompatible with more than 5."

Rusinkiewicz, section 1, paragraph 3: "In this paper, we first present the methodology used for comparing ICP variants, and introduce a number of test scenes used throughout the paper. Next, we summarize several ICP variants in each of the above six categories, and compare their convergence performance [repeating the process until the cache is empty, interpreted as continuing until all points have been matched or rejected]. As part of the comparison, we introduce the concept of normal-space-directed sampling, and show that it improves convergence in scenes involving sparse, small-scale surface features. Finally, we examine a combination of variants optimized for high speed."

(bold only) "determining the obstacle trajectory information according to the retained part":

Rusinkiewicz, section 3.4, paragraph 1: "Rejection of pairs that are not consistent with neighboring pairs, assuming surfaces move rigidly [Dorai 98]. This scheme classifies two correspondences (p1, q1) and (p2, q2) as inconsistent iff |Dist(p1, p2) - Dist(q1, q2)| is greater than some threshold. Following [Dorai 98], we use 0.1 * max(Dist(p1, p2), Dist(q1, q2)) as the threshold. The algorithm then rejects those correspondences that are inconsistent with most others [according to the retained part]. Note that the algorithm as originally presented has running time O(n^2) at each iteration of ICP. In order to reduce running time, we have chosen to only compare each correspondence to 10 others, and reject it if it is incompatible with more than 5."

Rusinkiewicz and Brower are analogous arts as they are both related to FIELD. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the point pair selection and rejection of Rusinkiewicz with the forward and backward tracking data sets of Brower to arrive at the present invention, in order to efficiently find matching pairs of points in the two data sets, as stated in Rusinkiewicz, section 1, paragraph 4: "In this paper, we first present the methodology used for comparing ICP variants, and introduce a number of test scenes used throughout the paper. Next, we summarize several ICP variants in each of the above six categories, and compare their convergence performance. As part of the comparison, we introduce the concept of normal-space-directed sampling, and show that it improves convergence in scenes involving sparse, small-scale surface features. Finally, we examine a combination of variants optimized for high speed."
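
The claimed merge step and the Rusinkiewicz pair-rejection scheme the Office Action maps onto it can each be sketched in a few lines. The sketch below is purely illustrative: merge_two_rounds reuses the hypothetical track dictionaries from the earlier sketch, and reject_pairs follows the consistency test quoted from section 3.4, including its compare-to-10, reject-if-more-than-5 shortcut; none of these names come from the cited references:

import math
import random

def merge_two_rounds(fwd, bwd):
    # Sketch of the claimed combination step: parts identical across the
    # two rounds are retained directly; differing parts go into a cache
    # together with their matching scores, and the highest-scoring cached
    # part is repeatedly retained (deleting any part conflicting with it)
    # until the cache is empty.
    signature = lambda t: tuple(t["hits"])
    common = {signature(t) for t in fwd} & {signature(t) for t in bwd}
    retained = [t for t in fwd if signature(t) in common]
    cache = [t for t in fwd + bwd if signature(t) not in common]
    conflicts = lambda a, b: bool(set(a["hits"]) & set(b["hits"]))
    while cache:
        best = max(cache, key=lambda t: t["score"])
        retained.append(best)
        cache = [t for t in cache if t is not best and not conflicts(t, best)]
    return retained  # the obstacle trajectory information

def inconsistent(c1, c2):
    # Consistency test quoted from Rusinkiewicz section 3.4 (after
    # [Dorai 98]): correspondences (p1, q1) and (p2, q2) are inconsistent
    # iff |Dist(p1,p2) - Dist(q1,q2)| > 0.1 * max(Dist(p1,p2), Dist(q1,q2)).
    (p1, q1), (p2, q2) = c1, c2
    dp, dq = math.dist(p1, p2), math.dist(q1, q2)
    return abs(dp - dq) > 0.1 * max(dp, dq)

def reject_pairs(correspondences, sample=10, max_bad=5):
    # The paper's reduced-cost variant: compare each correspondence to 10
    # others and reject it if it is incompatible with more than 5.
    kept = []
    for c in correspondences:
        others = [o for o in correspondences if o is not c]
        picked = random.sample(others, min(sample, len(others)))
        if sum(inconsistent(c, o) for o in picked) <= max_bad:
            kept.append(c)
    return kept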
Regarding claim 2 and analogous claims 7 and 12: Brower as modified by Popov and Rusinkiewicz teaches the method according to claim 1. Brower further teaches:

"wherein M detection models are provided, M being a positive integer greater than one":

Brower, paragraph 0005: "In contrast to conventional systems, such as those described above, the current system may use outputs from one or more existing machine learning models to generate additional ground truth data to train or retrain the machine learning model or another machine learning model to detect objects from multiple perspectives."

"and the step of performing obstacle detection on to-be-annotated sensor data by using the detection model comprises: performing model integration on the M detection models, and performing obstacle detection on the to-be-annotated sensor data by using an integrated model":

Brower, paragraph 0032: "The machine learning model(s) 104 may use the first sensor data 102 to compute the first object detections 106. Although examples are described herein with respect to using deep neural networks (DNNs), and specifically convolutional neural networks (CNNs), as the machine learning model(s) 104 (e.g., with respect to FIGS. 1B-1C), this is not intended to be limiting. For example, and without limitation, the machine learning model(s) 104 may include any type of machine learning model, such as a machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naive Bayes, k-nearest neighbor (Knn), K means clustering, random forest [performing obstacle detection on the to-be-annotated sensor data by using an integrated model], dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, long/short term memory/LSTM, Hopfield, Boltzmann, deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), object detection algorithms, computer vision algorithms, and/or other types of machine learning models."

Claim 5 and analogous claims 10 and 15 are rejected under 35 U.S.C. 103 over Brower as modified by Popov and Rusinkiewicz, in view of Shibata et al., US Pre-Grant Publication No. 2023/0325983 (hereafter Shibata).

Brower as modified by Popov and Rusinkiewicz teaches the method according to claim 1. Brower further teaches (bold only) "wherein the step of modifying the detection results according to the obstacle trajectory information, and taking modified detection results as required annotation results comprises: performing noise identification on the obstacle trajectory information by using a pre-trained noise identification model, and taking the detection result corresponding to the obstacle trajectory information identified as non-noise as the annotation result":

Brower, paragraph 0075: "The method 600, at block B608, includes determining, using an object tracking algorithm and based on the bounding label, locations of the object in additional images of the sequence of images. For example, object tracking 134 may be used to track locations of the object [obstacle trajectory information] in some of the undetected object frame(s) 132 based on the second object detections 124 in the detected object frame(s) 130."

Brower, paragraph 0076: "The method 600, at block B610, includes based on the location of the object, generating a second set of ground truth data including bounding labels associated with the object for the additional images of the sequence of images [taking modified detection results as required annotation results]. For example, label generation 136 may use locations of the object in undetected object frame(s) 132 to generate a ground truth data 138 including bounding labels (e.g., bounding labels for vehicle 310A, 310B and pedestrian 320A, 320B) associated with the object for some of the undetected object frame(s) 132 where the object is detected."

Brower does not explicitly teach (bold only) "wherein the step of modifying the detection results according to the obstacle trajectory information, and taking modified detection results as required annotation results comprises: performing noise identification on the obstacle trajectory information by using a pre-trained noise identification model, and taking the detection result corresponding to the obstacle trajectory information identified as non-noise as the annotation result."

Shibata teaches (bold only) "wherein the step of modifying the detection results according to the obstacle trajectory information, and taking modified detection results as required annotation results comprises: performing noise identification on the obstacle trajectory information by using a pre-trained noise identification model, and taking the detection result corresponding to the obstacle trajectory information identified as non-noise as the annotation result":

Shibata, paragraphs 0213-0214: "The first machine learning model 301 is a machine learning model that receives sensor data as an input and outputs information indicating whether or not noise occurs in the sensor data [a pre-trained noise identification model]. The first replacement-function machine learning model 3021 is a machine learning model that receives sensor data in which noise occurs as an input and outputs sensor data in which a noise portion of the sensor data in which noise occurs has been replaced with sensor data in which no noise occurs [taking the detection result corresponding to the obstacle trajectory information identified as non-noise as the annotation result]."

Shibata and Brower are both related to the same field of endeavor (object detection through sensor processing). Brower teaches detecting and tracking objects from sensor data. Shibata teaches the use of a model to remove noisy sensor data. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the noise filtering of Shibata with the teachings of Brower to arrive at the present invention, in order to remove noise for better downstream processing, as stated in Shibata, paragraph 0002: "In processing based on sensor data acquired from a sensor, the acquired sensor data is desirably reliable in order to appropriately perform the processing. For example, if noise occurs in the acquired sensor data, the sensor data is sensor data with low reliability, and there is a possibility that the processing is not appropriately performed," and paragraph 0007: "The present disclosure has been made to solve the problem described above, and an object of the present disclosure is to provide a sensor noise removal device capable of converting sensor data whose reliability is lowered by noise into sensor data in a state where no noise occurs."
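
As with the earlier sketches, the claimed noise-identification step reduces to a simple filter once a trained classifier exists. A minimal, hypothetical illustration follows; the noise model here is a toy stand-in, not Shibata's trained network:

def filter_by_noise_model(trajectories, is_noise):
    # Keep only detections whose trajectory the pre-trained noise
    # identification model classifies as non-noise; the survivors
    # become the annotation results.
    return [t for t in trajectories if not is_noise(t)]

# Toy stand-in for a trained model: flag very short tracks as noise,
# since one- or two-frame tracks are often spurious detections.
toy_is_noise = lambda t: len(t["hits"]) < 3

# Usage with the earlier sketches (frames: list of per-frame box lists):
#   fwd, bwd = two_rounds(frames)
#   annotation_results = filter_by_noise_model(
#       merge_two_rounds(fwd, bwd), toy_is_noise)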
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Harendt, US Pre-Grant Publication No. 2020/0217650, discloses a method for tracking an object in three-dimensional space using two sequences of images captured by sensors in different locations.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT SPRAUL, whose telephone number is (703) 756-1511. The examiner can normally be reached M-F 9:00 am - 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MICHAEL HUNTLEY, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VAS/
Examiner, Art Unit 2129

/MICHAEL J HUNTLEY/
Supervisory Patent Examiner, Art Unit 2129

Prosecution Timeline

Apr 18, 2022
Application Filed
Jan 14, 2025
Non-Final Rejection — §103
Mar 25, 2025
Response Filed
Apr 21, 2025
Final Rejection — §103
May 29, 2025
Request for Continued Examination
Jun 02, 2025
Response after Non-Final Action
Sep 15, 2025
Non-Final Rejection — §103
Nov 18, 2025
Response Filed
Dec 09, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591634
COMPOSITE EMBEDDING SYSTEMS AND METHODS FOR MULTI-LEVEL GRANULARITY SIMILARITY RELEVANCE SCORING
2y 5m to grant • Granted Mar 31, 2026
Patent 12591796
INTELLIGENT DISTANCE PROMPTING
2y 5m to grant • Granted Mar 31, 2026
Patent 12572620
RELIABLE INFERENCE OF A MACHINE LEARNING MODEL
2y 5m to grant • Granted Mar 10, 2026
Patent 12566974
Method, System, and Computer Program Product for Knowledge Graph Based Embedding, Explainability, and/or Multi-Task Learning
2y 5m to grant • Granted Mar 03, 2026
Patent 12547616
SEMANTIC REASONING FOR TABULAR QUESTION ANSWERING
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 59%
With Interview: 94% (+34.7%)
Median Time to Grant: 4y 6m
PTA Risk: High
Based on 34 resolved cases by this examiner; grant probability is derived from the career allow rate.
