Prosecution Insights
Last updated: April 19, 2026
Application No. 18/456,926

GEOPOSITION DETERMINATION AND EVALUATION OF SAME USING A 2-D OBJECT DETECTION SENSOR CALIBRATED WITH A SPATIAL MODEL

Non-Final OA: §102, §112
Filed: Aug 28, 2023
Examiner: CAMMARATA, MICHAEL ROBERT
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Anno.ai, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (above average): 213 granted / 305 resolved, +7.8% vs TC avg
Interview Lift: +35.9% (strong), based on resolved cases with vs. without an interview
Typical Timeline: 2y 4m average prosecution; 46 applications currently pending
Career History: 351 total applications across all art units

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 24.6% (-15.4% vs TC avg)
Comparisons are against a Tech Center average estimate • Based on career data from 305 resolved cases

Office Action

Rejections: §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 8 recites a "single-snap stereo fusion device," but this term is not understood. To what does "single-snap" refer? What is being fused? Note that a search for this term does not reveal any hits.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Watanabe (WO 2023189691 A1).

Claim 1

In regards to claim 1, Watanabe discloses a non-transitory computer-readable medium storing one or more instructions that, when executed by one or more processors, are configured to cause the one or more processors to perform operations {see [0055]-[0058], [0097], fig. 26, [0153]-[0162] discussing various computer implementations including the recited, standard computer elements} comprising:

detecting an object in an image scene data to create a detected object, the image scene data representing an image scene in a field of view of at least one object detection sensor {see person identification sub-system 104, which includes a camera (object detection sensor) and object/person detector for recognizing/detecting the object/person in image data to create a detected object including a person/object ID, Fig. 1, [0039], [0059]-[0061], [0098], [0102]};

determining a current geoposition data of the detected object using an image-position model, the image-position model determined by calibrating the at least one object detection sensor with a three-dimensional mapping provided by a spatial sensor {see object tracking sub-system 106, which includes, e.g., a LiDAR (spatial sensor) to determine the current 3D location data (geoposition data) of all objects within the field of view, including the object/person preliminarily detected/identified by the person identification sub-system 104, [0040], [0059]-[0061]. Further as to the image-position model, see [0044], in which the sensors may be pre-configured (calibrated) with a reference line/plane, and/or a reference position/location in the space may be calculated based on a known or pre-configured (calibrated) height of the object and/or focal length; [0100], [0103], [0114]};

determining an identity of the detected object to create a preliminary identity {see above cites including [0059] for the person identification sub-system 104 creating an object ID that serves as a "preliminary identity" until the reliability thereof may be confirmed by various methods such as ground distance and/or speed as discussed below};

identifying a data store identity of a plurality of data store identities that matches the preliminary identity of the detected object, the plurality of data store identities corresponding to identities of a plurality of detected objects {see above cites including [0097]-[0099], [0102] discussing a database 610 that stores various appearance data (identities) that may be used to identify a person or object}; and

recording the current geoposition data of the detected object with a data store record associated with the data store identity that matches the preliminary identity of the detected object if a physics model threshold of the detected object is satisfied {see Fig. 5, [0094]-[0095], which determines if the detected locations are within the ground distance; [0108], which compares sizes and moving speeds of detected objects and the tracked object's height or size, [0118]-[0126]; and determining if the moving speed of the object is lower than a human speed limit in [0109], [0132] to confirm reliability and assign the same object ID as per [0041], [0059], Supplementary Notes 10, 22, 23. It is noted that each of these techniques utilizes a physics model threshold, particularly the human speed limit threshold, which determines if the preliminary identity of the detected object is reliable enough to confirm the detected object's identity and is used to determine the object's pathway. See also Figs. 7, 9, 10, 12, 13, 14 and associated disclosures}.

Claim 2

In regards to claim 2, Watanabe discloses wherein when the current geoposition data is recorded in the data store record with a time associated with the determination of geoposition data, wherein the data store record includes a plurality of geoposition data, each geoposition data of the plurality of geoposition data having a time associated with a determination of each of the geoposition data {see Figs. 1-3, 17, 19A illustrating this concept of recording position with time. See also [0045]-[0048] further discussing recording location and timestamp data store records}.

Claim 3

In regards to claim 3, Watanabe discloses wherein the physics model threshold is satisfied based on an evaluation of the current geoposition data with a geoposition data associated with a geoposition data in the geoposition data set that is closest in time to the current geoposition data {see Fig. 5, [0094]-[0095], which determines if the detected locations are within the ground distance; [0108], which compares sizes and moving speeds of detected objects and the tracked object's height or size, [0118]-[0126]; and determining if the moving speeds of the objects are lower than a human speed limit in [0109], [0132], Supplementary Notes 10, 22, 23. It is noted that each of these techniques utilizes a physics model threshold, particularly the human speed limit threshold, which determines if the preliminary identity of the detected object is reliable enough to confirm the detected object's identity and is used to determine the object's pathway. See also Figs. 7, 9, 10, 12, 13, 14 and associated disclosures. Moreover, these determinations are based on evaluations of the current geoposition data with a geoposition data associated with a geoposition data in the geoposition data set that is closest in time to the current geoposition data}.

Claim 4

In regards to claim 4, Watanabe discloses wherein the physics model threshold is based on a speed of the object determined based upon a distance over which the object moves and a time difference over which the distance is measured {see [0108] comparing sizes and moving speeds of detected objects and the tracked object's height or size, [0118]-[0126]; and determining if the moving speeds of the objects are lower than a human speed limit in [0109], [0132], Supplementary Notes 10, 22, 23. It is noted that each of these techniques utilizes a physics model threshold, particularly the human speed limit threshold, which determines if the preliminary identity of the detected object is reliable enough to confirm the detected object's identity and is used to determine the object's pathway. Moreover, the speed is also based on the fundamental definition thereof (determined based upon a distance over which the object moves and a time difference over which the distance is measured)}.

Claim 5

In regards to claim 5, Watanabe discloses wherein the physics model threshold is at least one of a height of the object, a weight of the object, a trajectory of the object, a speed of the object, or a momentum of the object {see above cites for claim 1, which include at least height, trajectory (pathway), and/or speed of the object. Moreover, body size and body ratio are also detected in [0098], thus at least suggesting weight and momentum. Note also that this claim only requires one of the listed options}.

Claim 6

In regards to claim 6, Watanabe discloses wherein the geoposition data is determined based upon a first object detection sensor of the at least one object detection sensor {see person identification sub-system 104, which includes a camera (object detection sensor) and object/person detector for recognizing/detecting the object/person in image data to create a detected object including a person/object ID and geoposition data, Fig. 1, [0039], [0043]-[0044], [0059]-[0061], [0098], [0102], wherein [0100] further clarifies that location data may also be derived from image capturing device 602. Further as to plural/second object detection sensors, see also Fig. 4, sensors 442A…442N, [0068]-[0075], [0093]-[0094]}; and wherein the data store record includes geoposition data determined based upon a second object detection sensor {see object tracking sub-system 106, which includes, e.g., a LiDAR (spatial sensor) to determine the current 3D location data (geoposition data) of all objects within the field of view, including the object/person preliminarily detected/identified by the person identification sub-system 104, [0040], [0059]-[0061]. Further as to plural/second object detection sensors, see also Fig. 4, sensors 442A…442N, [0068]-[0075], [0093]-[0094]}.

Claim 7

In regards to claim 7, Watanabe discloses which further includes recording a traveler detection label with the data store record, the traveler detection label indicative of whether the physics model threshold of the detected object is satisfied {see the same object ID in [0041], [0059] and object/person detection reliability determinations that record a traveler detection label (e.g., same object ID) indicative of when the physics model threshold is satisfied. See also the joining of pathways in, for example, [0109], which joins pathways when the moving speed of the object is lower than a human moving speed limit, such that the joined pathways indicate or otherwise record a "traveler detection label" indicative of whether the physics model threshold (speed less than limit) of the detected object is satisfied}.

Claim 8

In regards to claim 8, Watanabe discloses wherein the spatial sensor is at least one of a stereo camera, a Sonar device, a RADAR device, an RF device, a LiDAR device, a device operating using photogrammetry, and a single-snap stereo fusion device {see LiDAR sensor [0004], [0040], [0099], [0140]}.

Claims 9 and 10

In regards to claims 9 and 10, Watanabe discloses (claim 9) which further includes determining a physical property data of the detected object based on the image scene data and the image-position model and (claim 10) wherein the physical property data including a plurality of physical property characteristics of the object including at least one of weight, height, trajectory, speed, or momentum {see above cites for claim 1, which include at least height, trajectory (pathway), and/or speed of the object. Moreover, body size and body ratio are also detected in [0098], thus at least suggesting weight and momentum. Note also that claim 10 only requires one of the listed options. Further as to the image-position model, see [0044], in which the sensors may be pre-configured (calibrated) with a reference line/plane, and/or a reference position/location in the space may be calculated based on a known or pre-configured (calibrated) height of the object and/or focal length; [0100], [0103], [0114]}.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Pelletier, US 20220004768 A1, discloses tracking objects over time and employs a physics model in the form of a maximum speed test for the objects to determine whether it is impossible for a person of interest to be present in the tracklet. See Fig. 1 of Pelletier (reproduced as an image in the original Office Action).

Min, US 20230394686 A1, discloses object tracking in which a spatial locus of uncertainty can, for example, be dependent upon the detection and/or identification of the object. For example, a certain class of objects could have a maximum speed Vd_max, in which case the locus of uncertainty would be Vd_max multiplied by the time interval. Likewise, a particular identified object could have a maximum speed Vi_max, in which case the locus of uncertainty would be Vi_max multiplied by the time interval, as per [0115]-[0119].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael R Cammarata, whose telephone number is (571) 272-0113. The examiner can normally be reached M-Th 7am-5pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL ROBERT CAMMARATA/
Primary Examiner, Art Unit 2667
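To make the "physics model threshold" reasoning in the §102 mapping above easier to follow, here is a minimal sketch of the speed-based plausibility check described for claims 1, 3, and 4: speed is derived from the distance between two timestamped geopositions, and a new observation is recorded against an existing identity only if that speed stays below a human speed limit. This is an illustrative reconstruction under assumed names (record_observation, GeopositionRecord, HUMAN_SPEED_LIMIT_MPS) and an assumed threshold value; it is not code from the application or from Watanabe.

```python
# Illustrative sketch of a speed-based "physics model threshold" check.
# All names and the 12 m/s limit are assumptions for illustration only.
import math
from dataclasses import dataclass, field

HUMAN_SPEED_LIMIT_MPS = 12.0  # assumed upper bound on human movement speed


@dataclass
class GeopositionRecord:
    identity: str
    history: list = field(default_factory=list)  # list of (t, x, y) fixes

    def last_fix(self):
        return self.history[-1] if self.history else None


def speed_between(t0, x0, y0, t1, x1, y1) -> float:
    """Speed = distance moved / time elapsed (the definition recited in claim 4)."""
    dt = t1 - t0
    if dt <= 0:
        return float("inf")
    return math.hypot(x1 - x0, y1 - y0) / dt


def record_observation(record: GeopositionRecord, t, x, y) -> bool:
    """Record the new geoposition only if the physics threshold is satisfied."""
    prev = record.last_fix()
    if prev is not None:
        speed = speed_between(*prev, t, x, y)
        if speed > HUMAN_SPEED_LIMIT_MPS:
            return False  # implausible jump; do not attribute it to this identity
    record.history.append((t, x, y))
    return True


# Example: a 1.4 m/s step is recorded; a 100 m jump within 1 s is rejected.
rec = GeopositionRecord(identity="person-42", history=[(0.0, 0.0, 0.0)])
assert record_observation(rec, 1.0, 1.4, 0.0) is True
assert record_observation(rec, 2.0, 101.4, 0.0) is False
```

The same structure would accommodate the other thresholds the Office Action points to (ground distance, height, or size comparisons): each is a gate applied before a new observation is joined to an existing pathway.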

Prosecution Timeline

Aug 28, 2023
Application Filed
Nov 19, 2025
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602797
RECONSTRUCTION OF BODY MOTION USING A CAMERA SYSTEM
2y 5m to grant Granted Apr 14, 2026
Patent 12586171
METHODS AND SYSTEMS FOR GRADING DEVICES
2y 5m to grant Granted Mar 24, 2026
Patent 12579597
Point Group Data Synthesis Apparatus, Non-Transitory Computer-Readable Medium Having Recorded Thereon Point Group Data Synthesis Program, Point Group Data Synthesis Method, and Point Group Data Synthesis System
2y 5m to grant Granted Mar 17, 2026
Patent 12579835
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM FOR DISTINGUISHING OBJECT AND SHADOW THEREOF IN IMAGE
2y 5m to grant Granted Mar 17, 2026
Patent 12567283
FACIAL RECOGNITION DATABASE USING FACE CLUSTERING
2y 5m to grant Granted Mar 03, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 99% (+35.9%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 305 resolved cases by this examiner. Grant probability derived from career allow rate.
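For readers who want to sanity-check the headline numbers, here is a minimal sketch of how a career allow rate and an interview-adjusted figure of this kind can be computed from resolved-case counts. The counts (213 granted of 305 resolved) and the 35.9% lift come from the report above; treating the lift as additive and capping the result at 99% are assumptions about how the dashboard combines its figures, not a documented formula.

```python
# Hypothetical reconstruction of the dashboard's headline statistics.
# The counts come from the report; the additive lift and the 99% cap
# are assumptions, not a documented formula.

def career_allow_rate(granted: int, resolved: int) -> float:
    """Fraction of resolved applications that were granted."""
    return granted / resolved


def with_interview_estimate(base_rate: float, interview_lift: float,
                            cap: float = 0.99) -> float:
    """Assumed model: add the interview lift to the base rate, capped."""
    return min(base_rate + interview_lift, cap)


base = career_allow_rate(213, 305)                # ~0.698, shown as 70%
adjusted = with_interview_estimate(base, 0.359)   # 1.057 capped to 0.99, shown as 99%
print(f"Career allow rate: {base:.1%}")
print(f"Estimated grant probability with interview: {adjusted:.1%}")
```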
