Prosecution Insights
Last updated: April 19, 2026
Application No. 18/761,457

OBJECT IDENTIFICATION

Non-Final OA: §101, §102, §112
Filed
Jul 02, 2024
Examiner
SAJOUS, WESNER
Art Unit
2612
Tech Center
2600 — Communications
Assignee
Ford Global Technologies LLC
OA Round
1 (Non-Final)
92%
Grant Probability (Favorable)
1-2
OA Rounds
2y 5m
To Grant
99%
With Interview

Examiner Intelligence

Grants 92% — above average
92%
Career Allow Rate
1099 granted / 1196 resolved
+29.9% vs TC avg
Moderate lift
+7.6%
Interview Lift
allow rate with vs. without an interview, across resolved cases
Typical timeline
2y 5m
Avg Prosecution
29 currently pending
Career history
1225
Total Applications
across all art units
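The headline figures above can be reproduced from per-case records. A minimal Python sketch, assuming each record carries granted and had_interview flags (the field names are illustrative, not from this page):

def allow_rate(cases):
    """Fraction of resolved cases that ended in a grant."""
    return sum(c["granted"] for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Allow-rate gap between cases with and without an examiner interview."""
    with_iv = [c for c in cases if c["had_interview"]]
    without_iv = [c for c in cases if not c["had_interview"]]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Career allow rate here: 1099 granted / 1196 resolved = 0.919, shown as 92%.
# The +7.6% interview lift is this gap computed over the same resolved cases.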

Statute-Specific Performance

Statute    Rate     vs TC Avg
§101       17.0%    -23.0%
§103       33.5%     -6.5%
§102       19.1%    -20.9%
§112       19.6%    -20.4%
Baseline = Tech Center average estimate (implied at 40.0% for each statute by the deltas) • Based on career data from 1196 resolved cases
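The per-statute cards follow the same recipe: bucket this examiner's rejections by statute, compute a rate per bucket, and subtract the Tech Center average. A minimal sketch; the exact metric (e.g., how often a rejection under that statute is eventually overcome) is an assumption, since the page does not define it:

from collections import defaultdict

def statute_rates(rejections):
    """rejections: dicts with assumed 'statute' and 'overcome' fields."""
    tally = defaultdict(lambda: [0, 0])          # statute -> [overcome, total]
    for r in rejections:
        tally[r["statute"]][0] += int(r["overcome"])
        tally[r["statute"]][1] += 1
    return {s: ok / total for s, (ok, total) in tally.items()}

def deltas_vs_tc(examiner_rates, tc_avg_rates):
    # e.g., 17.0% - 40.0% = -23.0% for §101, matching the card above
    return {s: examiner_rates[s] - tc_avg_rates[s] for s in examiner_rates}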

Office Action

§101 §102 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. It is responsive to the submission dated 07/02/2024. Claims 1-20 are presented for examination, of which claims 1 and 11 are independent.

Information Disclosure Statement

2. The information disclosure statements (IDSs) submitted on 07/02/2024 are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner.

Claim Rejections - 35 USC § 101

3. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

4. Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Under Step 1, it is determined whether the claims are directed to a statutory category of invention (see MPEP 2106.03(II)). In the instant case, claims 1-10 are directed to a method and thus fall within a statutory category. While the claims fall within a statutory category, under revised Step 2A, Prong 1 of the eligibility analysis (MPEP 2106.04), the claimed invention recites an abstract idea of organizing human activity and/or the grouping of “Mental Processes”. Specifically, claim 1 recites the abstract ideas of: “A method, comprising: upon obtaining a time series of point clouds, inserting point cloud data associated with an object in the respective point clouds; translating, in the respective point clouds, the point cloud data such that respective ranges in the point cloud data are increased based on a range threshold; and based on the translated point cloud data, training a machine learning program to identify the object at or beyond the range threshold”.

Thus, claim 1, under its broadest reasonable interpretation (BRI), covers performance of the limitations in the mind, but for the recitation of training a machine learning program. For example, under the BRI, the steps in claim 1 could be interpreted as: a person, in the mind, evaluates the time needed to arrange dots, in a sequential manner, to reflect the grid of an existing image; the person then (using pen and paper) follows a path similar to what is shown in the grid of the existing image, tracing lines between the dots to create an object image; and the person then evaluates whether the created image is the same size as, or larger than, the existing image.

Under revised Step 2A, Prong 1 of the eligibility analysis, it is necessary to evaluate whether the claim recites a judicial exception by referring to the subject matter groupings articulated in MPEP 2106.04(a). Under that analysis, the claim recites an abstract idea. Representative claim 1 recites the abstract idea of performing human activities in the mind together with using known mathematical relationships with pen and paper, as noted above. This concept is considered a method of organizing human activity via the grouping of mental processes.
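To make the translating limitation concrete, here is a minimal Python sketch of one plausible reading of that step; the (N, 3) array shape, the sensor-at-origin convention, and the centroid-based rule are illustrative assumptions, not the application's disclosed method:

import numpy as np

def translate_to_range(obj_points, range_threshold):
    """Rigidly shift an object's points (an (N, 3) array, sensor at the
    origin) so the object's centroid lies at or beyond range_threshold."""
    centroid = obj_points.mean(axis=0)
    rng = float(np.linalg.norm(centroid))
    if rng == 0.0 or rng >= range_threshold:
        return obj_points                    # degenerate, or already far enough
    # Shift outward along the sensor-to-centroid ray; a rigid translation
    # increases range while preserving the object's shape and point density.
    return obj_points + (range_threshold - rng) * (centroid / rng)

Under this reading, the translated points (with annotations updated for the increased ranges) would then be inserted back into each point cloud of the time series before training.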
Under revised Step 2A, Prong 2 of the eligibility analysis, if it is determined that the claims recite a judicial exception, it is then necessary to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of that exception. In this case, representative claim 1 includes the additional element of training a machine learning program to identify an object. The use of a machine learning program does not, by itself, integrate the exception into a practical application, because it is recited at a high level of generality such that it is merely being used to apply the abstract idea on a generic computer, as defined in MPEP 2106.05(f). The use of a machine learning program only presents the idea of a solution (i.e., translating the data points into an image for further visual evaluation of the connecting points within the image) while failing to describe how the machine learning program is structured or trained to achieve the solution of identifying whether the object is at or beyond the range threshold. This appears to be a mere tangential addition to the abstract idea(s) and amounts to extra-solution activity concerning mere data gathering and evaluation. The addition of an insignificant extra-solution activity limitation does not impose meaningful limits on the claim, as it is only nominally or tangentially related to the invention. Accordingly, the additional element of training a machine learning program does not integrate the abstract idea into a practical application of the invention.

Under Step 2B of the eligibility analysis, if it is determined that the claims recite a judicial exception that is not integrated into a practical application of that exception, it is then necessary to evaluate the additional elements individually and in combination to determine whether they provide an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself), as discussed in MPEP 2106.05(f). The judicial exception is not integrated into a practical application because merely citing the use of a machine learning program to provide an output of the recited mental process, without describing how the machine learning program is trained to achieve the stated solution, is an incidental or token addition to the claim that does not alter or affect how the process steps or functions in the abstract idea(s) are performed. Therefore, the claimed additional element does not add meaningful limitations to the indicated claim beyond a general linking to a technological environment. See MPEP 2106.05(h). Training a machine learning program to identify an object within an image is, in the field of artificial intelligence and computer vision, well-known, routine, and conventional, and so does not qualify as an inventive concept. As such, claim 1 is not patent eligible under 35 USC 101.

Dependent claims 2 and 4-10 further narrow the claimed abstract ideas by defining how the data gathering and human activities are performed using additional mathematical relationships for implementing the machine learning program for modifying the identified object. However, no details of using any specialized machine are provided describing how the machine learning program is structured or trained to achieve the desired solution. As such, these claims do not integrate the abstract idea into a practical application of the invention.
Dependent claim 3 further narrows the claimed abstract ideas by adding the use of a sensor to obtain data points to the otherwise mental processes of organizing human activities. However, the use of a sensor amounts to mere instructions to implement the abstract idea, as it does not qualify as an inventive concept in the art. The use of the sensor does not, by itself, integrate the exception into a practical application, because it is merely being used to apply the abstract idea on a generic computer, as defined in MPEP 2106.04(d). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept and thus do not impose any meaningful limits on practicing the abstract idea. As such, claim 3 is not patent eligible under 35 USC 101.

Claim Rejections - 35 USC § 112

5. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

6. Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The method of claim 1 renders the claim indefinite because it provides no concrete functional or structural features explaining how the processes of said method are performed. Claim 1, for example, recites: “A method comprising: upon obtaining a time series of point clouds, inserting point cloud data associated with an object in the respective point clouds; translating, in the respective point clouds, the point cloud data such that respective ranges in the point cloud data are increased based on a range threshold; and based on the translated point cloud data, training a machine learning program to identify the object at or beyond the range threshold”.

The above-stated steps, as claimed, appear to provide a concatenation of black-box steps of which only the inputs and outputs are specified. The claimed steps merely define obscure parameters (e.g., inserting point cloud data in respective point clouds, or increasing respective ranges in the point cloud data) by virtue of a vague relation to other unclear and undefined parameters (e.g., obtaining a time series of point clouds), as no details are provided describing how the parameters are determined to achieve the desired result. For instance, in the first limitation, it is unclear how the time series is obtained and how the point cloud data are inserted in the respective point clouds. Likewise, in the second limitation, it is unclear how the translating step is performed. The phrase “time series of point clouds” is vague because the details of what constitutes the point clouds (or their size) within an object are not made known in the claim.
The term “time series” is a relative term because it depends on context, such as the number of data points being classified, the trend of the classification, and whether the variables including the data points are classified discretely or continuously. The original disclosure, at paras. 13-37 and 54-57, merely describes the generation of a time series upon the use of a lidar sensor to collect point cloud data. However, details of how the volume forming the data points associated with the object is measured are not made clear in the disclosure. Further, the limitation reciting “an object in the respective point clouds” renders the claim indefinite, as it lacks sufficient antecedent basis for “the respective point clouds”. It is also unclear how an object can be within a point cloud when said point cloud may be construed as a mere representation of points or dots in 3D space. Paragraphs 62-64 of the original disclosure also describe using the point clouds to trace the trajectories of the object; such a description is different from identifying an object within the point cloud. As such, the limitations fail to limit the claim.

Lastly, while the last limitation recites “training a machine learning program to identify the object …”, the limitation fails to provide how the machine learning program is trained to achieve the stated solution. It also fails to provide a tangible reason for identifying the object. Thus, since the claimed limitations lack the details for which protection is sought with regard to the technical effect to be achieved by the steps of said method claim, one of ordinary skill in the art would not be able to draw a clear boundary between what is and is not covered by the claim. In response to this Office action, Applicant is advised to amend the claim such that it expressly recites the corresponding structure or material for performing the claimed function, and clearly links or associates that structure, material, or acts to the claimed function, without introducing any new matter.

As per claim 4, the limitation reciting “determining a trajectory of the object based on concatenating a three-dimensional (3D) bounding box associated with the translated point cloud data in the respective point clouds together” renders the claim indefinite, because it is unclear what is being determined together with the trajectory of the object.

The subject matter of claim 11 renders the claims indefinite because the limitations that include the phrase “time series of point clouds” are vague: the details of what constitutes the point clouds (or their size) within an object are not made known in the claim. The term “time series” is a relative term because it depends on context, such as the number of data points being classified, the trend of the classification, and whether the variables including the data points are classified discretely or continuously. The original disclosure, at paras. 13-37 and 54-57, merely describes the generation of a time series upon the use of a lidar sensor to collect point cloud data. However, details of how the volume forming the data points associated with the object is measured are not made clear in the disclosure. Further, the limitation reciting “an object in the respective point clouds” renders the claim indefinite, as it lacks sufficient antecedent basis for “the respective point clouds”.
It is also unclear how an object can be within a point cloud when said point cloud may be construed as a mere representation of points or dots in 3D space. Paragraphs 62-64 of the original disclosure also describe using the point clouds to trace the trajectories of the object; such a description is different from identifying an object within the point cloud. Thus, the metes and bounds of the claims are unclear due to a lack of clarification of what is being performed besides the object-identifying step. As such, the reader is left in doubt as to the meaning of the technical features to which the limitations of said claims refer, thereby rendering the definition of the subject matter of said claims unclear.

Claim 14 is indefinite for reasons similar to claim 4. The claims not specifically cited in this rejection are rejected as being dependent upon their rejected base claims.

Claim Rejections - 35 USC § 102

7. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

8. Claims 1, 3, 8-11, 13 and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by the article to Zhu et al., entitled “Class-balanced Grouping and Sampling for Point Cloud 3D Object Detection”.

Considering claim 1, Zhu discloses a method comprising: upon obtaining a time series of point clouds, inserting point cloud data associated with an object in the respective point clouds (for example, Zhu discloses using the Lidar Track to implement the nuScenes dataset to provide point cloud sweeps in 3D coordinates format, where each point cloud datum is associated with a time-stamp and the point cloud dataset is used for classifying an object; the nuScenes baseline is followed by accumulating 10 lidar sweeps to form dense point cloud inputs in 3D format; see page 2, section 2.1 in view of the “Introduction” section of Zhu); translating, in the respective point clouds, the point cloud data such that respective ranges in the point cloud data are increased based on a range threshold (for example, Zhu discloses that the nuScenes baseline is followed by accumulating 10 lidar sweeps to form dense point cloud inputs. Specifically: “our input is of (x, y, z, intensity, ∆t) format. ∆t is the time lag between each non-keyframe sweep regarding keyframe sweep, and ∆t ranges from 0s to 0.45s. We use grid size 0.1m, 0.1m, 0.2m in x, y, z axis respectively to convert the raw point cloud into voxel presentation. In each voxel, we take mean of all points in the same voxel to get final inputs to the network.” See page 2, section 2.1 of Zhu. See also para. 1 at page 3, which teaches that, to improve class imbalance in the final training dataset, the distribution of samples is made 4.5 times larger than the original sampling dataset, and multiple objects of different categories can appear in one point cloud sample); and based on the translated point cloud data, training a machine learning program to identify the object at or beyond the range threshold (for example, Zhu teaches that the 3D object detection task can be performed by converting the point cloud into bird-view format and applying a 2D CNN, or by implementing a multi-task learning technique to get a 3D object detection result.
See the “Introduction” section at page 1 of Zhu. Zhu further teaches: “we propose DS Sampling, … we firstly duplicate samples of a category according to its fraction of all samples. The fewer a category’s samples are, more samples of this category are duplicated to form the final training dataset. More specifically, we first count total point cloud sample number that exists a specific category in the training split, then samples of all categories which are summed up to 128106 samples.” See section 2.1 at page 3. Also, sections 2.2 and 2.3 and the figure of Zhu teach using different network architectures, including a 3D feature extractor network and/or a convolutional neural network, to force the network(s) to learn inter-class differences of common point cloud data in order to identify an object as the dominant object based on the object’s level of annotations over the whole dataset. Sections 2.4-3.2 of Zhu also teach that different training procedures can be used to predict the object detection result, based on a predefined threshold set according to the number or range of layers or data points respectively sparse for each bounding box).

As per claim 3, Zhu discloses, upon obtaining point cloud data via a sensor, inputting the point cloud data into the trained machine learning program (e.g., performing multi-task training for the nuScenes 3D object detection technique where only a lidar input is allowed when the Lidar Track is the first track; see the abstract and section 1, page 2 of Zhu); and operating a vehicle based on output from the trained machine learning program (e.g., applying the trained multimodal dataset for autonomous driving; see table 1 and sections 2.3-2.4 of Zhu).

As per claim 8, Zhu discloses that the point cloud data associated with the object is ground truth data. See section 3 of Zhu.

As per claim 9, Zhu discloses updating annotations corresponding to the point cloud data based on the increased respective ranges (e.g., conducting a data augmentation scheme during the training procedure by considering a maximum number of allowed point cloud data within a range of A x B x C meters in the respective X, Y, Z coordinates; see section 3).

As per claim 10, Zhu discloses determining a number of times to insert the point cloud data associated with the object based on a distribution of the object in a training dataset (e.g., Zhu teaches that, to improve class imbalance in the final training dataset, the distribution of samples is made 4.5 times larger than the original sampling dataset after counting the total point cloud data number that exists in a specific category; see section 2.1 at page 3 of Zhu).

The subject matter of independent claim 11 corresponds, in terms of a computer system, to that of independent method claim 1, and the rationale raised above to reject the latter also applies, mutatis mutandis, to the former. Claim 13 is rejected under the same rationale as claim 3. Claim 18 is rejected under the same rationale as claim 9. Claim 19 is rejected under the same rationale as claim 9. Claim 20 is rejected under the same rationale as claim 10.

Allowable Subject Matter
9. Claims 2, 4-7, 12, and 14-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, after correcting the indefiniteness issues raised above, because the prior art of record does not appear to teach the method and system of claims 1 and 11 wherein, for the respective point clouds: identifying a second object at the range threshold based on point cloud data associated with the second object; and upon translating the point cloud data associated with the object, adjusting a density of the point cloud data associated with the object based on a density of the point cloud data associated with the second object (as recited in claims 2 and 12); and further comprising determining a trajectory of the object based on concatenating a three-dimensional (3D) bounding box associated with the translated point cloud data in the respective point clouds together (as recited in claims 4 and 14).

Conclusion

10. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Heins et al. (US 20210209808) discloses systems and methods for compressing dynamic unstructured point clouds: a dynamic unstructured point cloud can be mapped to a skeletal system of a subject to form one or more structured point cloud representations; one or more sequences of the structured point cloud representations can be formed; and the sequences can then be compressed.

11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESNER SAJOUS, whose telephone number is (571) 272-7791. The examiner can normally be reached M-F, 9:30 to 6:30. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Broome Said, can be reached at 571-272-2931. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WESNER SAJOUS/
Primary Examiner, Art Unit 2612
WS 02/19/2026
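For readers mapping the §102 citations above onto code, here is a minimal Python sketch of the two Zhu techniques the rejection leans on: mean-pooling voxelization (section 2.1) and class-balanced duplication in the spirit of DS Sampling. Shapes, names, and the balancing rule are illustrative assumptions, not Zhu's implementation:

import numpy as np

def voxelize_mean(points, grid=(0.1, 0.1, 0.2)):
    """points: (N, 5) rows of (x, y, z, intensity, dt).
    Returns one row per occupied voxel (the mean of its points),
    using Zhu's 0.1 m x 0.1 m x 0.2 m grid."""
    idx = np.floor(points[:, :3] / np.asarray(grid)).astype(np.int64)
    inverse = np.unique(idx, axis=0, return_inverse=True)[1].reshape(-1)
    sums = np.zeros((inverse.max() + 1, points.shape[1]))
    np.add.at(sums, inverse, points)             # accumulate rows per voxel
    counts = np.bincount(inverse)[:, None]
    return sums / counts                         # mean of all points per voxel

def duplicate_by_class(samples_by_class, target_per_class):
    """Crude balancing in the spirit of DS Sampling: the fewer samples a
    category has, the more times its samples are duplicated."""
    return {cls: samples * max(1, round(target_per_class / len(samples)))
            for cls, samples in samples_by_class.items()}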

Prosecution Timeline

Jul 02, 2024
Application Filed
Feb 19, 2026
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this examiner in similar technology areas

Patent 12597177
Changing Display Rendering Modes based on Multiple Regions
2y 5m to grant • Granted Apr 07, 2026
Patent 12597185
METHOD, APPARATUS, AND DEVICE FOR PROCESSING IMAGE, AND STORAGE MEDIUM
2y 5m to grant • Granted Apr 07, 2026
Patent 12597203
SIMULATED CONSISTENCY CHECK FOR POINTS OF INTEREST ON THREE-DIMENSIONAL MAPS
2y 5m to grant • Granted Apr 07, 2026
Patent 12589303
Computer-Implemented Methods for Generating Level of Detail Assets for Dynamic Rendering During a Videogame Session
2y 5m to grant • Granted Mar 31, 2026
Patent 12592038
EDITABLE SEMANTIC MAP WITH VIRTUAL CAMERA FOR MOBILE ROBOT LEARNING
2y 5m to grant • Granted Mar 31, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
92%
Grant Probability
99%
With Interview (+7.6%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 1196 resolved cases by this examiner. Grant probability derived from the career allow rate (1099 granted / 1196 resolved ≈ 92%).
