Prosecution Insights
Last updated: April 19, 2026
Application No. 18/614,287

SAFETY DECOMPOSITION USING REDUNDANT FIELD OF VIEW OF MULTIPLE SENSORS

Non-Final OA §102
Filed: Mar 22, 2024
Examiner: MEMON, OWAIS IQBAL
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Torc Robotics, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 74% (75 granted / 101 resolved; +12.3% vs TC avg; above average)
Interview Lift: +22.4% for resolved cases with interview (strong)
Avg Prosecution: 3y 2m typical timeline; 27 applications currently pending
Total Applications: 128 across all art units (career history)
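
The headline figures are straightforward arithmetic on the career counts. A quick sketch in Python (assuming, as the displayed rounding suggests, that the "+12.3% vs TC avg" delta is expressed in percentage points):

```python
# Reproduce the examiner-intelligence figures from the raw counts above.
granted, resolved = 75, 101

allow_rate = granted / resolved   # 0.7426 -> displayed as 74%
tc_avg = allow_rate - 0.123       # back out the TC average from the
                                  # +12.3% delta (assumed to be in points)

print(f"career allow rate:  {allow_rate:.1%}")  # 74.3%
print(f"implied TC average: {tc_avg:.1%}")      # 62.0%
```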

Statute-Specific Performance

§101: 4.4% (-35.6% vs TC avg)
§103: 51.8% (+11.8% vs TC avg)
§102: 30.6% (-9.4% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Tech Center average figures are estimates • Based on career data from 101 resolved cases
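
Read the same way, the "vs TC avg" deltas let the Tech Center baselines be backed out per statute; a small sketch (again assuming the deltas are in percentage points, and noting the averages are estimates per the note above):

```python
# Back out the implied Tech Center average for each statute.
examiner_rate = {"§101": 4.4, "§103": 51.8, "§102": 30.6, "§112": 12.6}
delta_vs_tc = {"§101": -35.6, "§103": 11.8, "§102": -9.4, "§112": -27.4}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
# Every statute backs out to a 40.0% TC average here, consistent with a
# single TC-wide baseline rather than per-statute baselines.
```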

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings were received on 3/22/2024. These drawings are accepted.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “electronic control unit” in claims 1, 8 and 15.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sen et al. (US20240161342, hereinafter “Sen”).
Claim 1. Sen teaches: A perception system, comprising:

a first image sensor ([0028] “an image sensor”) configured to capture first calibration image data ([0028] “image data generated … for calibrating the image sensor”) in a first field of view, and subsequently capture first image data in the first field of view; ([0002] “array of cameras with various fields-of-view to capture visual information of an environment surrounding the vehicle,”)

a second image sensor ([0028] “LiDAR sensor”) configured to capture second calibration image data ([0028] “LiDAR data generated…for calibrating the image sensor with respect to the LiDAR sensor.”) in a second field of view, and subsequently capture second image data in the second field of view; ([0002] “array of cameras with various fields-of-view to capture visual information of an environment surrounding the vehicle,”)

a first electronic control unit (ECU) ([0115] “The controller(s) 1136 may include a first controller” and [0219] “electronic control unit (ECU),”) coupled to the first image sensor, ([0116] “The controller(s) 1136 may provide the signals for controlling one or more components and/or systems of the vehicle 1100 in response to sensor data received from one or more sensors (e.g., sensor inputs). The sensor data may be received from, … stereo camera(s)”) the first ECU comprising at least one memory ([0219] “memory”) configured to store machine executable instructions and at least one processor configured to execute the stored executable instructions; ([0039] “processor executing instructions stored in memory.” and [0128])

a second ECU ([0115] “a second controller 1136” and [0219] “electronic control unit (ECU),”) coupled to the second image sensor, ([0116] “The controller(s) 1136 may provide the signals for controlling one or more components and/or systems of the vehicle 1100 in response to sensor data received from one or more sensors (e.g., sensor inputs). The sensor data may be received from, … LiDAR sensor(s) 1164”) wherein the second ECU comprising at least one memory ([0219] “memory”) configured to store machine executable instructions and at least one processor configured to execute the stored machine executable instructions; ([0039] “processor executing instructions stored in memory.” and [0128])

wherein each of the first ECU and the second ECU is configured to: receive the first calibration image data and the second calibration image data; ([0115] “two or more controllers 1136 may handle a single functionality,”) and perform feature detection ([0028] “process the image data in order to determine, for each image of at least two images, feature points”) for calibration using the first calibration image data and the second calibration image data; ([0028] “calibrating the image sensor with respect to the LiDAR sensor.”)

wherein the first ECU is further configured to: identify a first set of pixels in the first field of view having common features with a second set of pixels in the second field of view; ([0028] “The system(s) may then track (e.g., associate, group, etc.) features points between two images. For instance, if a first feature point of a first image generated at a first time depicts a same feature in the environment as a second feature point of a second image generated at a second time, the system(s) may track the first feature point of the first image to the second feature point of the second image.”) receive the first image data from the first image sensor; ([0028] “use image data generated using an image sensor”) reduce the first image data to only the first set of pixels in the first field of view; ([0076] “perform a local cropping process 612 on the first image 602 to generate a first cropped image 614.”) and perform object detection on the first image data consisting of the first set of pixels; ([0123] “identify forward facing paths and obstacles…pedestrian detection, and collision avoidance” and [0124] “stereo camera pair may be used for depth-based object detection”)

and wherein the second ECU is further configured to: identify the second set of pixels in the second field of view having the common features with the first set of pixels in the first field of view; ([0028] “The system(s) may then track (e.g., associate, group, etc.) features points between two images. For instance, if a first feature point of a first image generated at a first time depicts a same feature in the environment as a second feature point of a second image generated at a second time, the system(s) may track the first feature point of the first image to the second feature point of the second image.”) receive the second image data from the second image sensor; ([0028] “LiDAR data generated using a LiDAR sensor”) reduce the second image data to only the second set of pixels in the second field of view; ([0076] “perform a local cropping process 616 on the second image 604 to generate a second cropped image 618”) and perform object detection on the second image data consisting of the second set of pixels. ([0123] “identify forward facing paths and obstacles…pedestrian detection, and collision avoidance” and [0157] “estimates of the object obtained from … other sensors (e.g., LiDAR sensor(s) 1164 or RADAR sensor(s) 1160)”)

Claim 2. Sen teaches the perception system of claim 1, wherein the first image sensor and the second image sensor are camera sensors. ([0116] “The sensor data may be received from, …stereo camera(s)”)

Claim 3. Sen teaches the perception system of claim 1, wherein the first image sensor and the second image sensor are radio detection and ranging (RADAR) sensors. ([0116] “The sensor data may be received from, … RADAR sensor(s) 1160”)

Claim 4. Sen teaches the perception system of claim 1, wherein the first image sensor and the second image sensor are light detection and ranging (LiDAR) sensors. ([0116] “The sensor data may be received from, …LiDAR sensor(s) 1164,”)

Claim 5. Sen teaches the perception system of claim 1, wherein the first ECU is further configured to verify the object detection performed using the first set of pixels matches with the object detection performed using the second set of pixels by the second ECU; and the second ECU is further configured to verify the object detection performed using the second set of pixels matches with the object detection performed using the first set of pixels by the first ECU. ([0204] “the vehicle 1100 itself must …decide whether to heed the result from a primary computer or a secondary computer (e.g., a first controller 1136 or a second controller 1136). For example, in some embodiments, the ADAS system 1138 may be a backup and/or secondary computer for providing perception information to a backup computer rationality module. The backup computer rationality monitor may run a redundant diverse software on hardware components to detect faults in perception and dynamic driving tasks.”)

Claim 6. Sen teaches the perception system of claim 5, wherein the instructions further cause each of the first ECU and the second ECU to, upon the object detection performed using the first set of pixels not matching with the object detection performed using the second set of pixels by the second ECU, ([0052] “The optimization component 116 may then determine differences associated with multiple projected points and features points associated with the multiple images.”) calibrate the perception system by re-identifying the first set of pixels in the first field of view having common features with the second set of pixels in the second field of view. ([0052] “perform similar processes to project feature points between multiple other images represented by the image data 112”)

Claim 7. Sen teaches the perception system of claim 5, wherein the first set of pixels and the second set of pixels include one or more lane identification markers. ([0155] “perform computer stereo vision… lane detection,”)

Claim 8. The method herein has been executed and performed by the system of claim 1 and is likewise rejected.
Claim 9. The method herein has been executed and performed by the system of claim 2 and is likewise rejected.
Claim 10. The method herein has been executed and performed by the system of claim 3 and is likewise rejected.
Claim 11. The method herein has been executed and performed by the system of claim 4 and is likewise rejected.
Claim 12. The method herein has been executed and performed by the system of claim 5 and is likewise rejected.
Claim 13. The method herein has been executed and performed by the system of claim 6 and is likewise rejected.
Claim 14. The method herein has been executed and performed by the system of claim 7 and is likewise rejected.
Claim 15. The system herein has been executed and performed by the system of claim 1 and is likewise rejected.
Claim 16. The system herein has been executed and performed by the system of claim 2 and is likewise rejected.

Claim 17. Sen teaches the vehicle of claim 15, wherein the first image sensor and the second image sensor are radio detection and ranging (RADAR) sensors, ([0116] “The sensor data may be received from, … RADAR sensor(s) 1160”) or the first image sensor and the second image sensor are light detection and ranging (LiDAR) sensors. ([0116] “The sensor data may be received from, …LiDAR sensor(s) 1164,”)

Claim 18. The system herein has been executed and performed by the system of claim 5 and is likewise rejected.
Claim 19. The system herein has been executed and performed by the system of claim 6 and is likewise rejected.
Claim 20. The system herein has been executed and performed by the system of claim 7 and is likewise rejected.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:

Tzabari et al. (US10701336) teaches left image and right image matching based on feature points.

Narayanan et al. (US20250264599) teaches synchronizing image, lidar and radar input data.
Gupta et al. (NPL, “Dual Image Cropping Algorithm for Enabling Redundant ASIL-D Safe Lane Detection”) teaches the same invention; however, the publication date is after the filing date of the instant invention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OWAIS MEMON whose telephone number is (571) 272-2168. The examiner can normally be reached M-F (7:00am - 4:00pm) CST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OWAIS I MEMON/
Examiner, Art Unit 2663
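
The claim chart above maps a concrete pipeline: two sensors with overlapping fields of view, one ECU per sensor, a calibration phase in which each ECU identifies the pixels its view shares with the other, runtime cropping of each frame to that shared region before object detection, and a cross-check between the two detections (claims 5-6), with re-identification of the shared pixels on a mismatch. A minimal structural sketch in Python; the ECU class, the brightest-pixel stand-in for a detector, and the agreement tolerance are illustrative assumptions, not the application's (or Sen's) actual implementation:

```python
"""Sketch of the claim-1 redundant-FOV pipeline (illustrative only)."""
from dataclasses import dataclass

import numpy as np


@dataclass
class ECU:
    name: str
    overlap_mask: np.ndarray | None = None  # pixels shared with the peer FOV

    def calibrate(self, own_cal: np.ndarray, peer_cal: np.ndarray) -> None:
        # Stand-in for feature detection/matching: treat pixels whose
        # calibration values agree across both sensors as "common features".
        self.overlap_mask = np.isclose(own_cal, peer_cal, atol=0.1)

    def detect(self, frame: np.ndarray) -> tuple[int, int] | None:
        # Reduce the frame to the shared pixels only, then run a toy
        # "object detector" (brightest remaining pixel) on that region.
        reduced = np.where(self.overlap_mask, frame, 0.0)
        if reduced.max() <= 0.0:
            return None
        row, col = np.unravel_index(int(reduced.argmax()), reduced.shape)
        return int(row), int(col)


def detections_match(a: tuple | None, b: tuple | None, tol: int = 2) -> bool:
    # Claim-5 style verification: both ECUs must report the same object.
    if a is None or b is None:
        return False
    return abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol


rng = np.random.default_rng(0)
scene = rng.random((32, 32))
cal_first = scene.copy()                                 # first sensor's view
cal_second = scene + rng.normal(0.0, 0.02, scene.shape)  # second, with noise

ecu_first, ecu_second = ECU("first"), ECU("second")
for ecu in (ecu_first, ecu_second):
    # Per claim 1, each ECU receives *both* calibration captures.
    ecu.calibrate(cal_first, cal_second)

det_a, det_b = ecu_first.detect(scene), ecu_second.detect(scene)
print("detections match:", detections_match(det_a, det_b))
# A mismatch would trigger recalibration, i.e. re-identifying the
# common-feature pixel sets (claim 6).
```

In the claim's arrangement, a disagreement between the two independent detections flags a perception fault and triggers recalibration rather than being silently accepted, which is the safety-decomposition point the title refers to.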

Prosecution Timeline

Mar 22, 2024: Application Filed
Feb 13, 2026: Non-Final Rejection (§102)
Apr 06, 2026: Interview Requested
Apr 13, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597224: SYSTEM AND METHOD FOR FEATURE SUB-IMAGE DETECTION AND IDENTIFICATION IN A GIVEN IMAGE
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591989: METHOD FOR DEPTH ESTIMATION AND HEAD-MOUNTED DISPLAY
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12592013: REAL SCENE IMAGE EDITING METHOD BASED ON HIERARCHICALLY CLASSIFIED TEXT GUIDANCE
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586338: SYSTEM FOR UPDATING NEURAL NETWORK PARAMETERS BASED ON OBJECT DETECTION AREA OVERLAP SCORE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573069: SYSTEMS AND METHODS FOR GENERATING AND CODING MULTIPLE FOCAL PLANES FROM TEXTURE AND DEPTH
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 97% (+22.4%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 101 resolved cases by this examiner. Grant probability derived from career allow rate.
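
The with-interview figure is consistent with simply adding the interview lift to the base grant probability in percentage points; a one-line check (an assumption inferred from the displayed numbers, not a documented formula):

```python
# 74% base grant probability plus the +22.4% interview lift, treated
# as additive percentage points (capped at 100%).
base = 75 / 101   # career allow rate, ~74.3% (displayed as 74%)
lift = 0.224      # "+22.4%" interview lift

print(f"with interview: {min(base + lift, 1.0):.0%}")  # 97%
```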
