Prosecution Insights
Last updated: April 19, 2026
Application No. 18/370,829

ASSOCIATING DETECTED OBJECTS AND TRAFFIC LANES USING COMPUTER VISION

Final Rejection §102, §103
Filed: Sep 20, 2023
Examiner: KRAYNAK, JACK PETER
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Torc Robotics, Inc.
OA Round: 2 (Final)
Grant Probability: 78% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 78% (75 granted / 96 resolved), above average (+16.1% vs TC avg)
Interview Lift: +18.8% across resolved cases with interview (a strong lift)
Typical Timeline: 3y 1m average prosecution; 18 applications currently pending
Career History: 114 total applications across all art units

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 27.3% (-12.7% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 96 resolved cases.
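The career allow rate and interview lift shown above read as simple ratios over the examiner's resolved cases. Below is a minimal sketch of how such figures are typically derived; the 75 granted / 96 resolved counts come from this page, while the with/without-interview split is hypothetical, chosen only so the arithmetic lands near the +18.8% lift reported. On the same assumption, the 97% "with interview" projection elsewhere on the page is consistent with adding that lift to the base rate (78.1% + 18.8% ≈ 97%).

```python
# Sketch only -- not the analytics tool's actual code.
granted, resolved = 75, 96                 # "75 granted / 96 resolved" from this page
allow_rate = granted / resolved            # 0.781 -> displayed as 78%

# Interview lift: allowance rate among resolved cases that had an interview,
# minus the rate among those that did not. The split below is hypothetical
# and chosen only so the example reproduces roughly the +18.8% shown above.
with_interview = {"granted": 29, "resolved": 32}
without_interview = {"granted": 46, "resolved": 64}

lift = (with_interview["granted"] / with_interview["resolved"]
        - without_interview["granted"] / without_interview["resolved"])

print(f"career allow rate: {allow_rate:.1%}")   # 78.1%
print(f"interview lift:    {lift:+.1%}")        # about +18.8% with these illustrative counts
```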

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s amendment filed on 2/16/2026 amended claims 1 and 11; the objection has been withdrawn, and the rejections under 112(b) are overcome. Claims 1-20 are pending.

Response to Arguments

Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5-8, 10-13, 15-18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being clearly anticipated by Abbott et al (US 20210042535 A1).

Regarding claim 1, Abbott et al teaches a method for managing location information in automated vehicles, the method comprising (Para 77 and Fig 5: FIG. 5 is a flow diagram showing a method 500 for determining lane assignments for objects in an environment):

obtaining, by a processor of an automated vehicle, image data from a camera on the automated vehicle, the image data includes a digital image of a roadway as a digital representation of imagery in a field-of-view of the camera including an operational environment with one or more objects and a roadway having a plurality of lanes (Para 78: the method 500, at block B504, includes identifying, using the object fence algorithm, a world space object fence for an object within a sensory field of view of at least one sensor of the one or more sensors. For example, the object fence algorithm 108 may be used to identify the object fence 110 in world space for the object within the sensory field of at least one sensor of the one or more sensors from which sensor data 102 is received; i.e., obtaining image data from a camera (automated vehicle, Para 26 and 34) including a digital image of a roadway (at least one sensor, see Para 36-37) capturing a field-of-view of the camera capturing a roadway having a plurality of lanes);

identifying, by the processor, the plurality of lanes in the digital image of the roadway (Para 80: the method 500, at block B508, includes receiving lane data representative of location of a plurality of lane lines within an image represented by the sensor data. For example, lane data representative of location of a plurality of lane lines within an image represented by the sensor data 102 may be received via the lane detection 112; furthermore, Para 81: the method 500, at block B510, includes generating a lane mask using the lane data. For example, the lane mask 116 may be generated—e.g., using lane triangulation 114—using the lane data received after lane detection 112; i.e., identifying the plurality of lanes in the digital image of the roadway);

identifying, by the processor, in the image data a vehicle as an object situated in the roadway (Para 77-83: the method 500, at block B504, includes identifying, using the object fence algorithm, a world space object fence for an object within a sensory field of view of at least one sensor of the one or more sensors. For example, the object fence algorithm 108 may be used to identify the object fence 110 in world space for the object within the sensory field of at least one sensor of the one or more sensors from which sensor data 102 is received; i.e., the object is a vehicle situated in the roadway and is identified by the processor; see also Para 40-45 and Fig 7A);

generating, by the processor, a plurality of data, the digital image segmented into the plurality of segments, the plurality of segments including at least one of i) image segments of the vehicle or ii) a lane of the plurality of lanes (Para 40-45, Para 77-90, and Fig 7A. Regarding the object (vehicle), Para 41: the output of object detection 104 may include points (e.g., pixels) in the image(s) where objects are determined to be located. In such examples, the points may include each of the points or pixels within the bounding shape. As such, the bounding shape, in addition to an object mask generated within the bounding shape, may define each of the pixels corresponding to the object. This process of object detection 104 may be carried out for any number of objects in each frame; i.e., the detected vehicle is segmented (masked) from the image data including the pixels corresponding to the object. Regarding the lane lines, Para 80: the method 500, at block B508, includes receiving lane data representative of location of a plurality of lane lines within an image represented by the sensor data. For example, lane data representative of location of a plurality of lane lines within an image represented by the sensor data 102 may be received via the lane detection 112; furthermore, Para 81: the method 500, at block B510, includes generating a lane mask using the lane data. For example, the lane mask 116 may be generated—e.g., using lane triangulation 114—using the lane data received after lane detection 112; i.e., identifying the plurality of lanes in the digital image of the roadway, which are segments (masked) including the lane line pixels);

each image segment of the vehicle containing a portion of the vehicle in the image data, each image segment of the vehicle including a plurality of pixels (Fig 2A-2D and 7A-7B, Para 41-47: the output of object detection 104 may include points (e.g., pixels) in the image(s) where objects are determined to be located. In such examples, the points may include each of the points or pixels within the bounding shape. As such, the bounding shape, in addition to an object mask generated within the bounding shape, may define each of the pixels corresponding to the object. This process of object detection 104 may be carried out for any number of objects in each frame. Furthermore, Para 46: the object fence(s) 110 may be generated as points (e.g., pixels and/or vertices) of a polygon corresponding to the object fence(s) 110. As such, by connecting the vertices, a polygon corresponding to the object fence(s) 110 may be generated, and the pixels within the object fence(s) 110 may be determined to correspond to the object for lane assignment. The object fence(s) 110 may be generated prior to lane assignment 120—using any method, such as those described herein—to improve accuracy and reliability of lane assignment predictions by more closely defining the shape or footprint of the object on a driving surface. For example, once the object fences 110 are determined, the object fences 110 may be used—in combination with the lane masks 116—to make an overlap determination 118 for lane assignment 120; i.e., the object fence contains pixels from the detected object (vehicle) and therefore can be considered segments of the vehicle that contain a portion of the vehicle in the image data, each segment of the vehicle including a plurality of pixels (a polygon that only contains detected pixels of the detected vehicle can be considered a segment of the vehicle containing a portion of the vehicle data, each segment including a plurality of pixels));

identify an overlap between i) one or more of the pixels of the plurality of the image segments of the vehicle and ii) the lane, and detecting, by the processor, the lane containing at least a portion of the vehicle in response to determining that at least one image segment of the vehicle overlaps the lane in the image data of the roadway (Para 82-83 and Para 89: the method 500, at block B512, includes determining an overlap between the image space object fence and one or more lanes defined by the lane mask. For example, overlap between object fence 110 and the one or more lanes defined by the lane mask 116 may be determined by overlap determination 110 using pixel counting 132 and/or boundary scoring 134. The method 500, at block B514, includes assigning the object to at least one of the one or more lanes based at least in part on the overlap. For example, lane assignment 120 may be used to assign the object to at least one of the one or more lanes in the lane mask 116; i.e., identifying an overlap between one or more of the image segment pixels of the vehicle and the lane, and assigning the object to a lane based on the overlap).

Regarding claim 2, Abbott et al teaches the method according to claim 1, further comprising determining, by the processor, an object position of the object in the image data relative to the automated vehicle, the object position including a predicted distance and a predicted angle relative to the automated vehicle (Para 47: future locations and/or assignments of objects in the environment may be determined using the sensor data 102 (e.g., LIDAR data, SONAR data, image data, RADAR data, etc.) and the object fence 110. For example, once the object fence 110 is determined, the sensor data 102—representative of speed, velocity, acceleration, yaw rate, etc.—may be used to determine a future path or trajectory of the objects in the environment to determine one or more future locations (e.g., 0.5 seconds in the future, 1 second in the future, etc.). For example, a linear dynamic model (e.g., future position = current position + (velocity × time)), a constant acceleration model, a constant turn-rate model, a machine learning model, another algorithm type, and/or a combination thereof may use the sensor data (or data represented thereby) to determine a future location(s) (e.g., in world space) of the object. Once the future location(s) are known, the object fence 110 may be generated (e.g., in image space) using the future location and the object fence 110 information; i.e., the object position of the image data relative to the automated vehicle (sensor) includes distance and angle (velocity, yaw rate, trajectory)).

Regarding claim 3, Abbott et al teaches the method according to claim 2, further comprising generating, by the processor, a bounding box for the vehicle in the image data (Fig 2A-2C, Para 27).

Regarding claim 5, Abbott et al teaches the method according to claim 1, further comprising, for each driving lane of the one or more lanes, applying, by the processor, to the image data a lane label associated with the particular lane and indicating a lane index value (Para 53 and 87: as a result of lane triangulation 114 and/or another mask generation approach, the lane mask 116 may be generated to represent pixels in image space that correspond to lanes or other portions of the driving surface. In some examples, each pixel in the lane mask 116 may be indexed as a color corresponding to a respective lane (e.g., with respect to visualization 660 of FIG. 6D, pixels for a left lane 662 may be a first color or pattern, pixels for a middle or ego-lane 664 may be a second color or pattern, and pixels for a third lane 666 may be a third color or pattern); i.e., a lane label associated with the particular lane and indicating a lane index value).

Regarding claim 6, Abbott et al teaches the method according to claim 1, further comprising applying, by the processor, to the image data a vehicle object label indicating a lane index value for the driving lane having the vehicle and lane information for the vehicle (Para 58: the pixels in the entirety of the object fence 110 may be matched against the lane pixels for each lane in the lane mask 116, and the object may be assigned to one or more lanes based on the pixel counts for object fence 110 pixels in each lane. The pixel index (e.g., color or pattern, as described with respect to FIG. 6D) associated with lane mask 116 may then be used to determine what lane the particular pixel overlaps with. In some examples, in order to associate an object with a lane, a number of overlapping pixels may be required to be above a threshold (e.g., greater than twenty pixels, greater than forty pixels, greater than ten percent of the pixels of the object fence 110, etc.). By using a threshold, false positives may be reduced because object fences 110 that slightly overlap with a lane may be discounted (which may account for some inaccuracy in the object fence 110); i.e., applying a vehicle object label indicating the lane index value for the driving lane having the vehicle and lane information for the vehicle).

Regarding claim 7, Abbott et al teaches the method according to claim 1, further comprising comparing, by the processor, location information in a vehicle object associated with the image segment of the vehicle against lane location information in a lane label associated with the lane (Para 57-58 and 77-89: the pixels in the entirety of the object fence 110 may be matched against the lane pixels for each lane in the lane mask 116, and the object may be assigned to one or more lanes based on the pixel counts for object fence 110 pixels in each lane. The pixel index (e.g., color or pattern, as described with respect to FIG. 6D) associated with lane mask 116 may then be used to determine what lane the particular pixel overlaps with. In some examples, in order to associate an object with a lane, a number of overlapping pixels may be required to be above a threshold (e.g., greater than twenty pixels, greater than forty pixels, greater than ten percent of the pixels of the object fence 110, etc.). By using a threshold, false positives may be reduced because object fences 110 that slightly overlap with a lane may be discounted (which may account for some inaccuracy in the object fence 110); i.e., comparing the image segment of the vehicle against lane location information in a lane label associated with the lane).

Regarding claim 8, Abbott et al teaches the method according to claim 1, further comprising comparing, by the processor, a first set of one or more pixels containing the image segment of the vehicle against a second set of one or more pixels containing the lane (Para 57-58 and 77-89: once the object fences and the lane mask are determined, the objects may be associated with the lanes by determining overlap between the pixels of the object fence and the lane mask in 2D image space. The object may be assigned to a lane(s) in the lane mask based on an overlap of pixels in the object fence with lane pixels in the lane mask. In some examples, each object may be assigned to a lane(s) based on a simple pixel count. In other examples, the object fence may be represented by vertices (or pixels corresponding thereto) along a perimeter of the object fence. Where an object is in a single lane, the pixel distances between each set of two perimeter pixels may be calculated, added up, and normalized to create a ratio of intersection per lane, which would be 1/1 for a single lane. Where an object is in more than one lane, a set of points that crosses the lane boundary may be determined. Once the two pixels are determined, a new vertex may be generated at the crossing between the two pixels. This pixel may then be used to determine the distance between the new vertex and each other perimeter pixel or vertex on either side of the crossing. A first sum of distances between the new vertex and a first set of perimeter pixels for a first lane may be calculated and a second sum of distances between the new vertex and a second set of perimeter pixels for a second lane may be calculated. These sums may then be normalized, in some embodiments. Ultimately, a ratio of intersection per lane may be determined using the first sum and the second sum. Knowing the ratio of intersection, and how the ratio changes from frame to frame, may provide an indication of the trajectory of the object (e.g., switching lanes, swerving, lane keeping, etc.); i.e., comparing a first set of pixels containing the image segment of the vehicle against a second set of one or more pixels containing the lane (multiple lanes)).

Regarding claim 10, Abbott et al teaches the method according to claim 1, wherein the processor obtains the image data from a plurality of cameras of the automated vehicle (Para 40: where object detection 104 and freespace detection 106 are used to generate the object fence 110, the sensor data 102 may be used as an input to both the object detection 104 and the freespace detection 106. For example, object detection 104 may use the sensor data 102 (e.g., image data from one or more cameras, LIDAR data from one or more LIDAR sensors, RADAR data from one or more RADAR sensors, etc.) to detect objects (e.g., vehicles, pedestrians, bicycles, debris, etc.) and generate bounding shapes (e.g., bounding boxes, circles, polygons, etc.) for the detected objects; i.e., obtaining the image data from a plurality of cameras of the automated vehicle).

Regarding claims 11-13, 15-18, and 20, these claims are rejected for the same reasons as claims 1-3, 5-8, and 10 above, respectively.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Abbott et al (US 20210042535 A1) in view of Yu et al (US 20190272433 A1).

Regarding claim 4, Abbott et al does not teach the method according to claim 1, further comprising, for each portion of the vehicle, generating, by the processor, a bounding box for the portion of the vehicle in the image data. In a similar field of endeavor, Yu et al teaches the method according to claim 1, further comprising, for each portion of the vehicle, generating, by the processor, the bounding box for the portion of the vehicle in the image data (Para 59 and Fig 7: in the first training phase of the first classifier 211, the autonomous vehicle occlusion detection system 210 can partition the bounding box for each vehicle object detected in the static image into a plurality of portions; see Fig 3). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Abbott et al (US 20210042535 A1) in view of Yu et al (US 20190272433 A1) so that the method comprises, for each portion of the vehicle, generating, by the processor, the bounding box for the portion of the vehicle in the image data. Doing so would effectively and efficiently detect the occlusion status of each vehicle from a set of input images (Para 49, Yu et al).

Regarding claim 14, claim 14 is rejected for the same reasons as claim 4 in the combination above.

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Abbott et al (US 20210042535 A1) in view of Agarwal et al (US 20240096111 A1).

Regarding claim 9, Abbott et al does not teach the method according to claim 1, wherein identifying the object includes predicting, by the processor, an object class for the object by applying an object recognition engine on a single frame of the image data. In a similar field of endeavor, Agarwal et al teaches the method according to claim 1, wherein identifying the object includes predicting, by the processor, an object class for the object by applying an object recognition engine on a single frame of the image data (Para 27: object detection and tracking can be used to identify an object and track the object over time. For example, an image of an object can be obtained, and object detection can be performed on the image to detect one or more objects in the image. In some cases, the detected object can be classified into a category of object and a bounding box can be generated to identify a position of the object in the image. Various types of systems can be used for object detection, including neural network-based object detectors. A single frame can be interpreted as one image of the input data in which the object is detected; see Para 40 regarding one image frame). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Abbott et al (US 20210042535 A1) in view of Agarwal et al (US 20240096111 A1) so that identifying the object includes predicting, by the processor, an object class for the object by applying an object recognition engine on a single frame of the image data. Doing so would accurately identify characteristics of the target object, including the locations, orientations, sizes, and/or other characteristics (Para 28, Agarwal et al).

Regarding claim 19, claim 19 is rejected for the same reasons as claim 9 in the combination above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20250054269 A1.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK PETER KRAYNAK, whose telephone number is (703) 756-1713. The examiner can normally be reached Monday - Friday, 7:30 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACK PETER KRAYNAK/
Examiner, Art Unit 2668

/UTPAL D SHAH/
Primary Examiner, Art Unit 2668
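The anticipation mapping above turns on Abbott's pixel-overlap lane assignment: a lane mask indexed per lane, an object fence of pixels for each detected vehicle, and a per-lane overlap count compared against a threshold. The sketch below is a minimal illustration of that style of technique for readers checking the claim chart. It is not code from Abbott or from the application; the function name, array layout, and worked example are assumptions, and the 20-pixel default simply mirrors one of the example thresholds quoted in the action.

```python
import numpy as np

def assign_lanes(lane_mask: np.ndarray,
                 fence_pixels: np.ndarray,
                 min_overlap: int = 20) -> list[int]:
    """Assign a detected object to lanes by counting overlapping pixels.

    lane_mask    -- HxW integer array; 0 = no lane, 1..N = lane index
                    (cf. the color-indexed lane mask 116 in the cited paragraphs)
    fence_pixels -- (K, 2) array of (row, col) pixels inside the object fence
                    of one detected vehicle
    min_overlap  -- minimum overlapping pixels before a lane is assigned
                    ("greater than twenty pixels" is one example threshold quoted above)
    """
    rows, cols = fence_pixels[:, 0], fence_pixels[:, 1]
    lane_ids = lane_mask[rows, cols]          # lane index under each fence pixel
    lanes, counts = np.unique(lane_ids[lane_ids > 0], return_counts=True)
    return [int(lane) for lane, count in zip(lanes, counts) if count >= min_overlap]

# Tiny worked example: a 6x8 image with two lanes and a fence straddling both.
lane_mask = np.zeros((6, 8), dtype=int)
lane_mask[:, 1:4] = 1                         # lane 1 covers columns 1-3
lane_mask[:, 4:7] = 2                         # lane 2 covers columns 4-6
fence = np.array([(r, c) for r in range(2, 6) for c in range(3, 6)])
print(assign_lanes(lane_mask, fence, min_overlap=4))   # -> [1, 2]
```

The boundary-scoring variant discussed for claim 8 (a normalized ratio of intersection per lane computed from perimeter distances) replaces the raw count, but the underlying comparison of fence pixels against lane pixels is the same.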

Prosecution Timeline

Sep 20, 2023
Application Filed
Nov 12, 2025
Non-Final Rejection — §102, §103
Jan 27, 2026
Applicant Interview (Telephonic)
Jan 27, 2026
Examiner Interview Summary
Feb 16, 2026
Response Filed
Mar 09, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602819
IMAGE PROCESSING APPARATUS, FEATURE MAP GENERATING APPARATUS, LEARNING MODEL GENERATION APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12592065
SYSTEMS AND METHODS FOR OBJECT DETECTION IN EXTREME LOW-LIGHT CONDITIONS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586210
BIDIRECTIONAL OPTICAL FLOW ESTIMATION METHOD AND APPARATUS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579720
METHOD OF GENERATING TRAINED MODEL, MACHINE LEARNING SYSTEM, PROGRAM, AND MEDICAL IMAGE PROCESSING APPARATUS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12568314
IMAGE SIGNAL PROCESSOR, METHOD OF OPERATING THE IMAGE SIGNAL PROCESSOR, AND APPLICATION PROCESSOR INCLUDING THE IMAGE SIGNAL PROCESSOR
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 97% (+18.8%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 96 resolved cases by this examiner. Grant probability derived from career allow rate.
