Prosecution Insights
Last updated: April 19, 2026
Application No. 18/643,620

TRUCK BED SURROUND VIEW CAMERA SYSTEM WITH AUTOMATED CAMERA VIEW SELECTION

Status: Final Rejection — §102, §103
Filed: Apr 23, 2024
Examiner: PICON-FELICIANO, RUBEN
Art Unit: 3747
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Ford Global Technologies LLC
OA Round: 2 (Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 68% — above average (483 granted / 708 resolved; -1.8% vs TC avg)
Interview Lift: +13.3% — a moderate lift, measured over resolved cases with interview
Avg Prosecution: 3y 1m typical timeline
Total Applications: 769 across all art units (61 currently pending)

Statute-Specific Performance

§101: 1.0% (-39.0% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 37.2% (-2.8% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 708 resolved cases
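A quick consistency check on these figures: subtracting each statute's reported delta from the examiner's rate backs out the implied Tech Center average, and every statute lands on the same estimate. A minimal sketch (variable names are ours; the figures are from the panel above):

```python
# Examiner's statute-specific overcome rate (%) and reported delta vs TC avg.
rates = {"101": 1.0, "103": 46.3, "102": 37.2, "112": 13.0}
deltas = {"101": -39.0, "103": +6.3, "102": -2.8, "112": -27.0}

# Implied Tech Center average = examiner rate - reported delta.
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute backs out to the same 40.0% TC estimate
```

That all four deltas reduce to one 40.0% baseline suggests the dashboard compares every statute against a single Tech Center allowance estimate rather than per-statute averages.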

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is sent in response to Applicant's Communication received on October 15, 2025.

Response to Arguments

Applicant's amendments/remarks filed October 15, 2025, with respect to the rejections of claims 13-19 under 35 U.S.C. 112(b) have been fully considered and are persuasive. Accordingly, the rejections of claims 13-19 under 35 U.S.C. 112(b) have been withdrawn.

Applicant's amendments/remarks filed October 15, 2025, with respect to new claim 21 have been fully considered and are persuasive. Accordingly, new claim 21 is objected to as containing allowable subject matter.

Applicant's amendments/remarks filed October 15, 2025, with respect to the rejections of claims 1, 3-7, and 9-20 under 35 U.S.C. 102(a)(1) as being anticipated by Raeis (US 2022/0410804 A1) have been fully considered, but they are not persuasive, as explained below.

Applicant respectfully asserts that the cited prior art fails to disclose the limitations "…wherein the vehicle defines a rear portion and a cab having a rear edge forward of the rear portion of the vehicle and the first camera is mounted adjacent the rear edge of the cab…".

The Examiner respectfully submits that Raeis discloses that imaging system 18 can include rear camera 48 alone, or can be configured such that system 10, as discussed further below, selectively utilizes only rear camera 48 and CHMSL camera 50 in a vehicle with multiple exterior cameras. In another example, the various cameras 48, 50, 52a, 52b included in imaging system 18 can be positioned to generally overlap in their respective fields of view, which may correspond with rear camera 48, CHMSL camera 50, and side-view cameras 52a and 52b, respectively.
In this manner, image data 55 from two or more of the cameras can be combined in image processing routine 64, or in another dedicated image processor within imaging system 18, into a single image for purposes of identifying the coupling feature 14 using the image processing routine 64 ([0046]).

However, the Examiner recognizes the differences between the present application and the cited prior art by Raeis. More specifically, the present application discloses a camera that is mounted along a side wall of the truck bed and is aligned with a rear axle of the vehicle, as shown in Figures 5A-5G of the present application. Accordingly, in view of the above, the Examiner has indicated new claim 21 as containing allowable subject matter.

Disposition of Claims

Claims 1 and 3-21 are pending in this application. Claim 21 is objected to as containing allowable subject matter. Claims 1 and 3-20 are rejected.

Allowable Subject Matter

Claim 21 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3-7, and 9-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Raeis (US 2022/0410804 A1).
Regarding claim 1, Raeis (Figs. 1-16) discloses: A system for assisting in aligning a vehicle (Vehicle 12: Fig. 1) for hitching with a trailer (Trailer 16: Fig. 1), comprising: an imaging system (imaging system 18: Fig. 2 and [0038, 0042, 0045-0046, 0056, 0065, 0070]) including: a first camera (first camera 48 or rear camera 48: Figs. 1-2) capturing image data of an area to a rear of the vehicle (12) and outputting the image data; and a second camera (second camera 50 or CHMSL camera 50: Figs. 1-2) positioned within a truck bed (bed 25 of the truck: Fig. 1) of the vehicle (Vehicle 12: Fig. 1), directed toward a center of the truck bed (bed 25 of the truck: Fig. 1), and capturing image data of a first area within the truck bed (bed 25 of the truck: Fig. 1), including the center of the truck bed (bed 25 of the truck: Fig. 1); and a controller (controller 26: Fig. 2 and [0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]): identifying at least one of the trailer (Trailer 16: Fig. 1) or a coupler (coupling feature 14: Fig. 1) of the trailer (Trailer 16: Fig. 1) within the image data from the first camera (first camera 48 or rear camera 48: Figs. 1-2) and determining that the trailer (Trailer 16: Fig. 1) is of a first trailer type that is configured to connect with a trailer hitch positioned within an area toward the center of the truck bed (bed 25 of the truck: Fig. 1) {[Abstract, 0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]: "A system for assisting in aligning a vehicle for hitching with a trailer includes a first camera and a second camera and outputting image data to the rear of the vehicle and a controller identifying at least one of the trailer or a coupling feature of the trailer within the image data and assigning a trailer type to the trailer identified within the image data, the trailer type including a first trailer type and a second trailer type. The controller further causes a portion of the image data to be presented on a display within the vehicle, the portion of the image data corresponding with the first camera in response to the trailer being assigned the first trailer type and corresponding with the second camera in response to the trailer being assigned the second trailer type" and "System 10 includes an imaging system 18 including a first camera 48 and a second camera 50 and outputting image data 55 to the rear 17 of the vehicle 12 and a controller 26 identifying at least one of the trailer 16 or a coupling feature 14 of the trailer 16 within the image data 55 and assigning a trailer type to the trailer 16 identified within the image data 55, the trailer type including a first trailer type and a second trailer type. The controller 26 further causes a portion of the image data 55 to be presented on a display 44 within the vehicle 12, the portion of the image data 55 corresponding with the first camera 48 in response to the trailer 16 being assigned the first trailer type and corresponding with the second camera 50 in response to the trailer 16 being assigned the second trailer type, presents a target image 45 over the portion of the image data 55 presented on the display 44, the target image 45 being smaller than a field of view 49 or 51 associated with the portion of the image data 55 and corresponding with a target position area 110 relative to the vehicle 12, and determines that the at least one of the coupling feature 14 and the trailer 16 is within the target position area 110 and outputs a steering signal 69 to the vehicle 12 to cause the vehicle 12 to steer to align a hitch 37 of the vehicle 12 with the coupling feature 14"}; and causing at least a portion of the image data from the second camera (second camera 50 or CHMSL camera 50: Figs. 1-2) to be presented on a display within the vehicle (12) in response to determining that the trailer (Trailer 16: Fig. 1) is of the first trailer type {[0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]}.

Regarding claim 13, Raeis (Figs. 1-16) discloses: A vehicle (Vehicle 12: Fig. 1), comprising: a cab (vehicle cabin 15: Fig. 1) having a rear edge and a truck bed (bed 25 of the truck: Fig. 1) disposed rearward of the cab (15) and defining a rear portion of the vehicle (Vehicle 12: Fig. 1) ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); an imaging system (imaging system 18: Fig. 2 and [0038, 0042, 0045-0046, 0056, 0065, 0070]) including: a first camera (first camera 48 or rear camera 48: Figs. 1-2) mounted adjacent the rear edge of the cab (15) and capturing image data of an area to the rear of the vehicle (12) and outputting the image data ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); and a second camera (second camera 50 or CHMSL camera 50: Figs. 1-2) positioned within the truck bed (bed 25 of the truck: Fig. 1), directed toward a center of the truck bed (bed 25 of the truck: Fig. 1), and capturing image data of a first area within the truck bed (bed 25 of the truck: Fig. 1), including the center of the truck bed (bed 25 of the truck: Fig. 1) ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); and a controller (controller 26: Fig. 2 and [0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]): identifying at least one of a trailer (Trailer 16: Fig. 1) or a coupler (coupling feature 14: Fig. 1) of the trailer (16) within the image data from the first camera (first camera 48 or rear camera 48: Figs. 1-2) and determining that the trailer is of a first trailer type that is configured to connect with a trailer hitch positioned within an area toward the center of the truck bed (bed 25 of the truck: Fig. 1) ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); and causing at least a portion of the image data from the second camera to be presented on a display within the vehicle in response to determining that the trailer is of the first trailer type ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 20, Raeis (Figs. 1-16) discloses: A method for assisting in aligning a vehicle (12) for hitching with a trailer (16), comprising: identifying at least one of the trailer (16) or a coupler (coupling feature 14: Fig. 1) of the trailer (16) within image data received from a first camera (first camera 48 or rear camera 48: Figs. 1-2) capturing image data of an area to the rear of the vehicle (12) ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); determining that the trailer (16) is of a first trailer type that is configured to connect with a trailer hitch positioned within an area toward the center of a bed of the vehicle (12) ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); and causing at least a portion of image data from a second camera (second camera 50 or CHMSL camera 50: Figs. 1-2) to be presented on a display within the vehicle (12) in response to determining that the trailer (16) is of the first trailer type, the second camera (second camera 50 or CHMSL camera 50: Figs. 1-2) being positioned within a truck bed (bed 25 of the truck: Fig. 1) of the vehicle (12), directed toward a center of the truck bed (bed 25 of the truck: Fig. 1), and capturing image data of a first area within the bed (bed 25 of the truck: Fig. 1), including the center of the bed (bed 25 of the truck: Fig. 1) ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).
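The controller logic recited in independent claims 1, 13, and 20 amounts to a view-selection dispatch: classify the trailer from the first (rear) camera's image data, then route the second (bed) camera's feed to the display when the trailer is of the first, in-bed-hitch type. A minimal sketch (type and function names are illustrative, not from either specification; the gooseneck/fifth-wheel gloss on the first trailer type is our assumption):

```python
from enum import Enum, auto

class TrailerType(Enum):
    # First trailer type: couples to a hitch positioned toward the center
    # of the truck bed (e.g., gooseneck or fifth-wheel).
    IN_BED_HITCH = auto()
    # Second trailer type: couples to a conventional rear receiver hitch.
    REAR_HITCH = auto()

def select_camera_view(trailer_type: TrailerType) -> str:
    """Route the display feed per the claimed controller behavior."""
    if trailer_type is TrailerType.IN_BED_HITCH:
        return "bed_camera"   # second camera, aimed at the bed center
    return "rear_camera"      # first camera, aimed behind the vehicle

print(select_camera_view(TrailerType.IN_BED_HITCH))  # bed_camera
```

The anticipation dispute then reduces to whether Raeis's CHMSL camera 50 and rear camera 48 perform this same type-conditioned routing, not whether the cameras exist.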
Regarding claim 3, Raeis discloses the system according to claim 1, and further discloses: wherein: the controller further attempts to identify the trailer hitch positioned within the area toward the center of the truck bed within the image data from the first camera; and responsive to the controller identifying the trailer hitch ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]): prompting a user selection of the image data from the second camera to be presented on the display ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); and only causing the at least the portion of the image data from the second camera to be presented on the display, in response to the user selecting the image data from the second camera for presentation on the display ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 4, Raeis discloses the system according to claim 3, and further discloses: wherein the controller prompts the user to select the image data from the second camera by presenting a selection option on the display ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 5, Raeis discloses the system according to claim 1, and further discloses: wherein, prior to causing at least the portion of the image data from the second camera to be presented on the display within the vehicle in response to determining that the trailer is of the first trailer type, the controller identifies an obstruction of a portion of the image data from the first camera corresponding with the area to the center of the truck bed ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 6, Raeis discloses the system according to claim 1, and further discloses: a third camera positioned within the truck bed of the vehicle, directed toward the center of the truck bed, and capturing image data of a second area within the truck bed, including the center of the truck bed ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]), wherein: the controller further causes at least a portion of the image data from the third camera to be presented on the display within the vehicle, in response to determining that the trailer is of the first trailer type ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 7, Raeis discloses the system according to claim 6, and further discloses: wherein the controller selectively causes at least the portion of the image data from the second camera or the third camera to be presented on the display within the vehicle, in response to determining that the trailer is of the first trailer type, upon a further determination that one of the first area or the second area is a preferred area ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 9, Raeis discloses the system according to claim 7, and further discloses: wherein the controller determines that the one of the first area or the second area is the preferred view based on receipt of one of GPS data or RFID data associated with a location of the vehicle relative to the trailer ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 10, Raeis discloses the system according to claim 7, and further discloses: wherein the controller determines that the one of the first area or the second area is the preferred area based on a user selection prompted by the controller in response to determining that the trailer is of the first trailer type ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).
Regarding claim 11, Raeis discloses the system according to claim 1, and further discloses: wherein: the second camera is included in a camera assembly that includes a shutter alternately covering and uncovering the first camera with respect to the truck bed and an actuator configured for moving the shutter ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); and the controller is in communication with the actuator to cause selective covering and uncovering of the second camera based on a use state of the second camera ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 12, Raeis discloses the system according to claim 1, and further discloses: a light source directed toward the truck bed including the area toward the center of the truck bed ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]), wherein: the controller is in communication with the light source and causes selective illumination of the light source to enhance visibility of the coupler in the image data from the second camera ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 14, Raeis discloses the vehicle according to claim 13, and further discloses: a vehicle steering system; and a vehicle brake system ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); wherein the controller further determines that the at least one of the coupler and the trailer is within a target position area and outputs a steering signal to the vehicle steering system to cause the vehicle to steer to laterally align a hitch of the vehicle with the coupler and a brake signal to the vehicle brake system to cause the vehicle to stop with the hitch in a longitudinally aligned position with respect to the coupler ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 15, Raeis discloses the vehicle according to claim 13, and further discloses: wherein: the controller further attempts to identify a hitch positioned within the area toward the center of the truck bed within the image data from the first camera ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); and responsive to the controller identifying the hitch ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]): prompting the user to select the image data from the second camera to be presented on the display ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); and only causing the at least a portion of the image data from the second camera to be presented on the display, in response to the user selecting the image data from the second camera for presentation on the display ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 16, Raeis discloses the vehicle according to claim 13, and further discloses: wherein: the imaging system further includes a third camera positioned within the truck bed of the vehicle, directed toward the center of the truck bed, and capturing image data of a second area within the truck bed, including the center of the truck bed ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]), wherein: the controller further causes at least a portion of the image data from the third camera to be presented on the display within the vehicle, in response to determining that the trailer is of the first trailer type ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 17, Raeis discloses the vehicle according to claim 16, and further discloses: wherein the controller selectively causes at least a portion of the image data from the second camera or the third camera to be presented on the display within the vehicle, in response to determining that the trailer is of the first trailer type, upon a further determination that one of the first area or the second area is a preferred area ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 18, Raeis discloses the vehicle according to claim 13, and further discloses: wherein: the second camera is included in a camera assembly that includes a shutter alternately covering and uncovering the second camera with respect to the truck bed and an actuator configured for moving the shutter ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]); and the controller is in communication with the actuator to cause selective covering and uncovering of the second camera based on a use state of the second camera ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Regarding claim 19, Raeis discloses the vehicle according to claim 13, and further discloses: a light source directed toward the truck bed including the area toward the center of the truck bed ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]), wherein: the controller is in communication with the light source and causes selective illumination of the light source to enhance visibility of the coupler in the image data from the second camera ([0038, 0040, 0042-0043, 0045-0048, 0050-0056, 0063, 0065-0073]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Raeis (US 2022/0410804 A1), in view of Nagasamy (US 11,270,164 B1), further in view of EL-SAWAH (DE 102021128907 A1).

Regarding claim 8, Raeis discloses the system according to claim 7. But Raeis does not explicitly and/or specifically meet the following limitation: (A) wherein the controller determines that the one of the first area or the second area is the preferred area based on an evaluation completed using a neural network.

However, regarding limitation (A) above, Nagasamy discloses/teaches the following: A camera sensor 216, typically a video camera, can be included in a vehicle 101.
The camera sensor 216 can be oriented to provide a field of view that includes a view of a trailer 200 including the trailer coupler 204, the ball mount 208 attached to the vehicle 101, and an environment on either side of the trailer 200. In some examples a second camera sensor 216 can be included to acquire images including more of the environment on both sides of the trailer 200, and a third camera sensor 216 can be included to acquire images from the back of the trailer 200. A computer 105 can determine, based on images 300 acquired by the camera sensor 216, a target location for the trailer 200, such as a location of a parking spot, dock, or ramp, e.g., a location for parking and/or loading or unloading a trailer 200, and a trailer angle corresponding to the location of the trailer with respect to the vehicle 110. A target location can be determined by processing the image 300 with a deep neural network, for example. Based on a determined target location and a determined trailer angle 210, a computer 105 can determine a vehicle path upon which to operate the vehicle 101 that will cause the attached trailer 200 to turn in the appropriate direction at the appropriate rate to position the trailer at the target location. For example, the trailer angle can be used to determine a direction in which to reverse the vehicle 110 to move the trailer to a desired location. As is known, reversing a vehicle 101 with a trailer 200 attached will, when the vehicle is turning, cause the trailer to turn in a direction opposite to the direction in which the vehicle 101 is turning. Because of this, the vehicle path determined by computer 105 to move the trailer 200 into the target location can require both forward and reverse motion of the vehicle 101, for example (Column 7, Lines 43-67).

Deep neural network 400 is trained by processing a dataset that includes a large number (>1000) of training images that include a plurality of trailer 200 types at a plurality of trailer angles 210 in a plurality of environmental conditions. Each image in the dataset has corresponding ground truth data that specifies the trailer angle 210 of the trailer 200 in the image. Ground truth data is data regarding an input image 402 that is determined by a process independent from the deep neural network 400. Ground truth data is deemed to represent a measurement of the real world. For example, a ground truth trailer angle 210 can be estimated by manual inspection of the image, i.e., estimating a trailer angle 210 in image data using instruments including rulers and protractors on image hard copies, for example. In other examples, a ground truth trailer angle 210 can be estimated by measuring the trailer angle 210 in the real world using instruments such as rulers and protractors on the real-world vehicle 101 and trailer 200 being imaged by the camera sensor 216. In training, a trailer angle 210 determined by deep neural network 400 from processing an input image 402 is backpropagated and compared to the ground truth trailer angle 210 corresponding to the input image 402. Backpropagation can compute a loss function based on the trailer angle 210 and corresponding ground truth trailer angle 210. A loss function is a mathematical function that maps a value such as a trailer angle 210 into a real number that corresponds to a cost. In this example the cost can be determined as a difference in degrees between the determined trailer angle 210 and the trailer angle 210 in the corresponding ground truth data. The loss function determines how closely the trailer angle 210 matches the angle in the ground truth data and is used to adjust the parameters or weights that control the deep neural network (Column 13, Lines 27-59).
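The training cost Nagasamy describes is simply the difference in degrees between the network's predicted trailer angle and the ground-truth angle. A minimal sketch of that loss (the wrap-around normalization is our addition; the patent does not specify how angle wrap is handled):

```python
def trailer_angle_loss(predicted_deg: float, ground_truth_deg: float) -> float:
    """Cost = difference in degrees between the predicted and ground-truth
    trailer angle, per the loss described at col. 13 of Nagasamy."""
    # Normalize into [-180, 180) so 359 deg vs 1 deg costs 2 deg, not 358 deg.
    diff = (predicted_deg - ground_truth_deg + 180.0) % 360.0 - 180.0
    return abs(diff)

print(trailer_angle_loss(359.0, 1.0))  # 2.0
```

During training this scalar cost would be backpropagated to adjust the network weights, exactly the role the quoted passage assigns to the loss function.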
Further on, regarding limitation (A) above, EL-SAWAH (paragraphs [0025, 0032-0045]) discloses/teaches the following: In various instances, the acquisition of coupler 16 position data as discussed herein may include the recording and processing of image data. To identify that the image data depicts the trailer 18, and more specifically the coupler position 24 of the coupler 16, the system 10 may process the image data with a trailer detection model. The trailer detection model may be implemented as a trained model consisting of algorithms and processing steps that evaluate the image data and determine whether an imaged object or feature corresponds to a trailer type that the trailer detection model is trained to detect, such that the coupler position 24 can be precisely identified. Accordingly, the disclosure provides for detection of a trailer type related to that depicted in the image data so that the system 10 can maneuver the vehicle to connect to the coupler location 24.

To enhance the operability of the system 10 for recognizing and detecting the trailer 18, the disclosure provides an interactive procedure whereby the user U can interact with the vehicle 12 and/or a remote device 30 to capture image data of the trailer 18. Accordingly, when the system 10 is unable to identify the trailer 18 based on the pre-configured software provided with the vehicle 12, the disclosure provides a method and system for training the system 10 to identify the trailer 18 in the image data to accurately detect the trailer 18 and the coupler position 24 of the compatible coupler 16. As discussed in more detail with reference to FIG. 5, the systems and methods provided may utilize a neural network to improve the robustness and accuracy of the system 10 in identifying new or previously untrained trailer types.

The neural network may be configured to learn how to accurately detect previously unidentified trailers, or variations of known trailer types, that cause the preprogrammed or factory-configured detection models to be unable to identify a trailer. In this way, the disclosure provides an improved method for training the system 10 to identify new trailer types that have not been previously trained, or to tune the factory-supplied trained models or original software to increase the robustness of detection of a trailer 18 type to better identify the coupler position 24. These and other aspects of the disclosure are described in more detail in the following description, particularly with respect to FIGS. 5-9.

As illustrated in FIG. 5, the disclosed systems and methods may utilize a neural network 116 including a plurality of neurons 118 to improve the robustness and accuracy of the trailer detection model processed by the system 10. In general, the disclosure contemplates image data 112 being collected by the user U on-site via a guided process configured to capture the image data 112 or training data required to modify and enhance the training of the trailer identification model. In this way, the system 10 can be trained, via a user-initiated procedure, so that the trailer detection model accurately detects a particular trailer that is not detected by the existing iteration of the trailer detection model. Such a procedure can reduce customer reliance on manufacturer-driven modifications to the trailer detection model and result in significantly improved system 10 operation and user satisfaction.

Once the image data 112 is received by the neural network 116, a deep learning procedure may be implemented to regress or estimate the locations and proportions of the trailer 18 and the coupler position 24 in the image data 112 or frames. For example, the neural network 116 can be implemented as a deep convolutional network. The architecture of the neural network 116 can be a variety of convolutional networks followed by activation functions. To help avoid overfitting, dropout layers and other regularization techniques can be implemented. In an exemplary embodiment, fully connected layers at the end of the neural network 116 are responsible for identifying and outputting the coupler position 24 and heading of the trailer 18.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the control system of Raeis to incorporate the use of a neural network, as taught by Nagasamy and EL-SAWAH, to enhance the operability of the system for recognizing and detecting the trailer.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ruben Picon-Feliciano, whose telephone number is (571) 272-4938. The examiner can normally be reached Monday-Thursday, 11:30 am-7:30 pm ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lindsay M. Low, can be reached at (571) 272-1196. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RUBEN PICON-FELICIANO/
Examiner, Art Unit 3747

/GRANT MOUBRY/
Primary Examiner, Art Unit 3747
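The reply windows recited in the Conclusion reduce to date arithmetic from the action's mailing date. A minimal sketch using the Jan 24, 2026 final rejection date from the timeline below (ignoring the business-day rollover of 37 CFR 1.7 and the advisory-action wrinkle for replies filed within two months; the `add_months` helper is ours):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Same day-of-month `months` later (all dates used here land safely)."""
    m = d.month - 1 + months
    return d.replace(year=d.year + m // 12, month=m % 12 + 1)

mailed = date(2026, 1, 24)                # final action mailing date
shortened_period = add_months(mailed, 3)  # 3-month shortened statutory period
statutory_max = add_months(mailed, 6)     # absolute 6-month statutory cutoff

print(shortened_period, statutory_max)    # 2026-04-24 2026-07-24
```

Extensions under 37 CFR 1.136(a) buy back months between those two dates, but nothing moves the six-month statutory cutoff.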

Prosecution Timeline

Apr 23, 2024
Application Filed
Jul 12, 2025
Non-Final Rejection — §102, §103
Oct 15, 2025
Response Filed
Jan 24, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601670: CONTROLLING A VISCOSITY OF FUEL IN A FUEL CONTROL SYSTEM WITH A VIBRATORY METER (granted Apr 14, 2026; 2y 5m to grant)
Patent 12594915: BRAKE FORCE DISTRIBUTION DEVICE FOR VEHICLE AND METHOD THEREOF (granted Apr 07, 2026; 2y 5m to grant)
Patent 12583384: SYSTEM AND METHOD FOR CONTROLLING A VEHICLE CONDITION CHECK LIGHT USING A DWL MODE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12583423: METHOD FOR DRIVE CONTROL (granted Mar 24, 2026; 2y 5m to grant)
Patent 12576901: SYSTEM AND METHOD FOR HAPTIC CALIBRATION (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 82% (+13.3%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 708 resolved cases by this examiner. Grant probability derived from career allow rate.
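These projections are reproducible from the examiner stats above: the base grant probability is the career allow rate (483 of 708 resolved), and the with-interview figure adds the +13.3-point interview lift, which rounds to the displayed 82%. A minimal sketch (variable names are ours):

```python
granted, resolved = 483, 708
base = 100 * granted / resolved  # career allow rate, ~68.2%
interview_lift = 13.3            # percentage-point lift with interview
with_interview = base + interview_lift

print(round(base, 1), round(with_interview, 1))  # 68.2 81.5
```

Note the lift is additive in percentage points, not a relative multiplier, which is why the two displayed figures differ by about 14 points.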
