Prosecution Insights
Last updated: April 19, 2026
Application No. 18/770,422

DETERMINING AND MAPPING LOCATION-BASED INFORMATION FOR A VEHICLE

Non-Final OA: §102, §103, §112, §DP
Filed
Jul 11, 2024
Examiner
BOYLAN, JAMES T
Art Unit
2486
Tech Center
2400 — Computer Networks
Assignee
Lyft Inc.
OA Round
1 (Non-Final)
63%
Grant Probability
Moderate
1-2
OA Rounds
2y 9m
To Grant
74%
With Interview

Examiner Intelligence

Grants 63% of resolved cases
63%
Career Allow Rate
305 granted / 487 resolved
+4.6% vs TC avg
+11.8%
Interview Lift
for resolved cases with interview
Typical timeline
2y 9m
Avg Prosecution
34 currently pending
Career history
521
Total Applications
across all art units

Statute-Specific Performance

§101
1.8%
-38.2% vs TC avg
§103
50.3%
+10.3% vs TC avg
§102
13.0%
-27.0% vs TC avg
§112
23.7%
-16.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 487 resolved cases

Office Action

Rejections: §102, §103, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 10/10/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

Claim 9 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 9 states “wherein the motion vectors are aggregated to improve prediction accuracy of the predicted future positions of the dynamic objects relative to the candidate interaction points”. The examiner could only locate support for “motion vectors can be used to identify or confirm the presence of one or more doors that can be used to access a given physical structure. For example, motion vectors corresponding to pedestrians, when aggregated, may identify a door to a building that is being used to enter and exit the building.” In other words, the pedestrian motion vectors are aggregated to identify an entry/exit point of a building. Where is there support for “wherein the motion vectors are aggregated to improve prediction accuracy of the predicted future positions of the dynamic objects relative to the candidate interaction points”?
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 10, 11 and 16 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-5, 8, 11, 13-16 and 18-19 of U.S. Patent No. 10,621,452 in view of Iagnemma et al. (hereinafter referred to as Iagnemma) (US 20180113455). U.S. Patent ‘452 claims the majority of the independent claim limitations except for “real-time sensor data”. However, Iagnemma does disclose real-time data for an AV (Iagnemma para. 0003 and/or 0010). It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the method of U.S. Patent ‘452 to add the teachings of Iagnemma, in order to utilize real-time data for an AV, since the AV needs to make decisions in real-time.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,067,787 in view of Iagnemma (US 20180113455). U.S. Patent ‘787 claims the majority of the independent claim limitations except for “real-time sensor data”. However, Iagnemma does disclose real-time data for an AV (Iagnemma para. 0003 and/or 0010). It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the method of U.S. Patent ‘787 to add the teachings of Iagnemma, in order to utilize real-time data for an AV, since the AV needs to make decisions in real-time.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 6, 10-12 and 16-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Iagnemma (US 20180113455).

Regarding claim 1, Iagnemma discloses a computer-implemented method, comprising: receiving real-time sensor data from one or more sensors associated with a vehicle, the real-time sensor data describing a physical environment surrounding the vehicle and including static and dynamic objects; [See Iagnemma [0005], sensors to measure properties of the vehicle’s surroundings. Also, see 0003 and/or 0010, real-time data for an AV. Also, see 0121, objects such as traffic cones/road signs and a moving pedestrian.] identifying candidate interaction points in the physical environment for the vehicle to stop based at least in part on historical interaction points; and [See Iagnemma [0007-0008], historical environment information, and/or 0021, stored data is maintained indicative of potential stopping places.] filtering the candidate interaction points to exclude interaction points that are obstructed by the static and dynamic objects within a selected period of time. [See Iagnemma [0121], determining when an area for stopping is valid when the stopping area is occupied by other vehicles, objects such as a fallen tree or construction debris, and a moving object such as a pedestrian. Also, see 0021, stored data is maintained indicative of potential stopping places, and 0021, current signals are received that represent perceptions of actual conditions at one or more of the potential stopping places. Also, see 0010, the AV generates control actions based on both real-time sensor data and prior information.]

Regarding claim 2, Iagnemma discloses the method of claim 1. Furthermore, Iagnemma discloses further comprising: accessing historical map data for a plurality of geographic areas, wherein the historical map data describes known physical structures and the historical interaction points. [See Iagnemma [0107], Google Earth 3D model of the environment. Also, see 0021, stored data is maintained indicative of potential stopping places.]

Regarding claim 6, Iagnemma discloses the method of claim 1. Furthermore, Iagnemma discloses further comprising: updating a three-dimensional interaction point map based on the filtered candidate interaction points; and [See Iagnemma [0107], 3D model of the local environment. Also, see 0021, the stored data is updated based on changes in the perceptions of actual conditions. Also, see 0021, stored data is maintained indicative of potential stopping places, and 0021, current signals are received that represent perceptions of actual conditions at one or more of the potential stopping places.] causing distribution of the updated three-dimensional interaction point map to a fleet of vehicles over one or more computer networks. [See Iagnemma [0107], 3D model of the local environment. Also, see 0021, the stored data is updated based on changes in the perceptions of actual conditions. Also, see 0022, distributing information from one of the vehicles to other vehicles of the fleet via crowd sourcing.]

Regarding claim 10, Iagnemma discloses the method of claim 1. Furthermore, Iagnemma discloses further comprising: detecting the static obstacles in the physical environment based on the real-time sensor data, including at least one of a fire hydrant, a crosswalk, or a parking restriction, wherein each static obstacle is determined to be within a predetermined distance from at least one of the candidate interaction points. [See Iagnemma [0026], stopping places with specified thresholds for acceptability, where the information includes whether the vehicle can legally stop at the potential stopping place. Also, see 0070-0071, parking restrictions.]

Regarding claim 11, see the examiner’s rejection of claim 1, which is analogous and applicable for the rejection of claim 11. Regarding claim 12, see the examiner’s rejection of claim 2, which is analogous and applicable for the rejection of claim 12. Regarding claim 16, see the examiner’s rejection of claim 1, which is analogous and applicable for the rejection of claim 16. Regarding claim 17, see the examiner’s rejection of claim 2, which is analogous and applicable for the rejection of claim 17.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3, 5, 13, 15, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Iagnemma (US 20180113455) in view of Kentley et al. (hereinafter referred to as Kentley) (US 20170132934).

Regarding claim 3, Iagnemma discloses the method of claim 2. Furthermore, Iagnemma does not explicitly disclose wherein the identifying the candidate interaction points comprises: disambiguating physical structures within a target geographic area based on the real-time sensor data and the historical map data; and identifying the candidate interaction points based on the disambiguated physical structures. However, Kentley does disclose wherein the identifying the candidate interaction points comprises: disambiguating physical structures within a target geographic area based on the real-time sensor data and the historical map data; and [See Kentley [0057], comparing sensor data associated with surfaces of buildings with 3D map data.] identifying the candidate interaction points based on the disambiguated physical structures. [See Kentley [0103-0104], identify points to park the AV.]
It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the method of Iagnemma to add the teachings of Kentley, in order to facilitate updated implementations for an autonomous vehicle which are better suited for the purpose of addressing safety risks while navigating the AV [See Kentley [0006-0007]].

Regarding claim 5, Iagnemma discloses the method of claim 1. Furthermore, Iagnemma does not explicitly disclose wherein the identifying the candidate interaction points comprises: applying image segmentation techniques to images of physical structures based on the real-time sensor data captured by the one or more sensors to identify boundaries between the physical structures. However, Kentley does disclose this limitation. [See Kentley [0109], segment portions of image data to distinguish objects from each other (i.e., the boundaries of the objects will indeed be determined inherently such that the objects are distinguished).] The same motivation as applied in claim 3 applies here.

Regarding claim 13, see the examiner’s rejection of claim 3, which is analogous and applicable for the rejection of claim 13. Regarding claim 15, see the examiner’s rejection of claim 5, which is analogous and applicable for the rejection of claim 15. Regarding claim 18, see the examiner’s rejection of claim 3, which is analogous and applicable for the rejection of claim 18. Regarding claim 20, see the examiner’s rejection of claim 5, which is analogous and applicable for the rejection of claim 20.

Claims 4, 14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Iagnemma (US 20180113455) in view of Kentley (US 20170132934) and in further view of Assaf et al. (hereinafter referred to as Assaf) (US 20180165518).

Regarding claim 4, Iagnemma discloses the method of claim 1. Furthermore, Iagnemma does not explicitly disclose wherein the identifying the candidate interaction points comprises: disambiguating physical structures using LiDAR data in the real-time sensor data to distinguish between buildings based on differences in at least one of construction materials, geometric shapes, or foliage density. However, Kentley does disclose wherein the identifying the candidate interaction points comprises: disambiguating physical structures using LiDAR data in the real-time sensor data to distinguish between buildings. [See Kentley [0073], object detector to distinguish objects relative to other features in the environment. Also, see 0072, detection of objects using LiDAR data. Also, see 0104, identify points to park the AV.]

It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the method of Iagnemma to add the teachings of Kentley, in order to facilitate updated implementations for an autonomous vehicle which are better suited for the purpose of addressing safety risks while navigating the AV [See Kentley [0006-0007]].

Iagnemma (modified by Kentley) does not explicitly disclose distinguishing between buildings based on differences in at least one of construction materials, geometric shapes, or foliage density. However, Assaf does disclose this limitation. [See Assaf [0030], use LiDAR data to detect the presence and shapes of objects and to distinguish objects from each other.]

It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the method of Iagnemma (modified by Kentley) to add the teachings of Assaf, in order to use obvious shapes of buildings to assist an object detector. This will provide improved object recognition [See Assaf [0019]].

Regarding claim 14, see the examiner’s rejection of claim 4, which is analogous and applicable for the rejection of claim 14.
Regarding claim 19, see the examiner’s rejection of claim 4, which is analogous and applicable for the rejection of claim 19.

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Iagnemma (US 20180113455) in view of Kentley (US 20170132934) and in further view of Bae et al. (hereinafter referred to as Bae) (US 20180099661).

Regarding claim 7, Iagnemma discloses the method of claim 1. Furthermore, Iagnemma does not explicitly disclose further comprising: determining trajectories of the dynamic objects by tracking motion vectors of the dynamic objects over time using data from LiDAR and optical cameras; and predicting future positions of the dynamic objects based on the trajectories of the dynamic objects. However, Kentley does disclose predicting future positions of the dynamic objects based on the trajectories of the dynamic objects. [See Kentley [0067], determine the future state of objects via prediction, tracking the object. Also, see 0057, predict the behavior of external objects.]

It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the method of Iagnemma to add the teachings of Kentley, in order to facilitate updated implementations for an autonomous vehicle which are better suited for the purpose of addressing safety risks while navigating the AV [See Kentley [0006-0007]].

Iagnemma (modified by Kentley) does not explicitly disclose further comprising: determining trajectories of the dynamic objects by tracking motion vectors of the dynamic objects over time using data from LiDAR and optical cameras. However, Bae does disclose this limitation. [See Bae [0006], sensors include camera and LiDAR. Also, see 0113, tracking the motion of objects via motion vectors.]
It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the method of Iagnemma (modified by Kentley) to add the teachings of Bae, in order to utilize obvious image processing techniques (such as motion vectors for motion prediction) for determining/predicting the future state of an object in Kentley. This will improve upon driving safety [See Bae [0007]].

Regarding claim 8, Iagnemma (modified by Kentley and Bae) discloses the method of claim 7. Furthermore, Iagnemma discloses wherein the filtering the candidate interaction points to exclude interaction points that are obstructed by the static and dynamic objects within the selected period of time comprises: filtering the candidate interaction points to exclude interaction points that are obstructed by the dynamic objects. [See Iagnemma [0121], determining when an area for stopping is valid when the stopping area is occupied by other vehicles, objects such as a fallen tree or construction debris, and a moving object such as a pedestrian. Also, see 0021, stored data is maintained indicative of potential stopping places, and 0021, current signals are received that represent perceptions of actual conditions at one or more of the potential stopping places. Also, see 0010, the AV generates control actions based on both real-time sensor data and prior information.]

Iagnemma does not explicitly disclose the remaining limitation of claim 8. However, Kentley does disclose it. [See Kentley [0067], determine the future state of objects via prediction, tracking the object. Also, see 0057, predict the behavior of external objects.]

It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the method of Iagnemma to add the teachings of Kentley, in order to facilitate updated implementations for an autonomous vehicle which are better suited for the purpose of addressing safety risks while navigating the AV [See Kentley [0006-0007]].
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Iagnemma (US 20180113455) in view of Kentley (US 20170132934), in further view of Bae (US 20180099661), and in further view of Nedrich et al. (hereinafter referred to as Nedrich) (“Detecting behavioral zones in local and global camera views”).

Regarding claim 9, Iagnemma (modified by Kentley and Bae) discloses the method of claim 7. Furthermore, Iagnemma does not explicitly disclose wherein the motion vectors are aggregated to improve prediction accuracy of the predicted future positions of the dynamic objects relative to the candidate interaction points. However, Nedrich does disclose this limitation. [See Nedrich [Introduction], an important step when seeking to understand a scene is to identify regions where activity enters and exits, which may correspond to a doorway. For tasks such as object tracking, entry regions allow for more knowledgeable tracker initialization. Also, see Abstract, these observations (i.e., tracking data) are then clustered to produce a set of potential entry and exit regions within a scene.]

It would have been obvious to the person of ordinary skill in the art at the time of the effective filing date to modify the method of Iagnemma (modified by Kentley and Bae) to add the teachings of Nedrich, in order to cluster object trajectories to detect entry or exit regions within a scene and improve upon monitoring traffic at these regions [See Nedrich [Introduction, last paragraph]]. This will improve upon the AV parking in Iagnemma by monitoring traffic at these regions (i.e., a parking spot by a busy office door will need to be monitored more closely).
Allowable Subject Matter

Claim 9 would be allowable if rewritten to overcome the rejection(s) under Double Patenting set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES T BOYLAN, whose telephone number is (571) 272-8242. The examiner can normally be reached Monday-Friday, 7am-3pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JAMIE ATALA, can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAMES T BOYLAN/
Examiner, Art Unit 2486

Prosecution Timeline

Jul 11, 2024
Application Filed
Nov 12, 2024
Response after Non-Final Action
Mar 19, 2026
Non-Final Rejection — §102, §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598400
LIGHT FIELD MICROSCOPE-BASED IMAGE ACQUISITION METHOD AND APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12587635
AFFINE MERGE MODE WITH TRANSLATIONAL MOTION VECTORS
2y 5m to grant Granted Mar 24, 2026
Patent 12587752
TENSORIAL TOMOGRAPHIC FOURIER PTYCHOGRAPHY
2y 5m to grant Granted Mar 24, 2026
Patent 12581196
GUIDED REAL-TIME VEHICLE IMAGE ANALYZING DIGITAL CAMERA WITH AUTOMATIC PATTERN RECOGNITION AND ENHANCEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12579616
ENHANCED EXTENDED DEPTH OF FOCUSING ON BIOLOGICAL SAMPLES
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
63%
Grant Probability
74%
With Interview (+11.8%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 487 resolved cases by this examiner. Grant probability derived from career allow rate.
