Prosecution Insights
Last updated: April 19, 2026
Application No. 18/294,816

A METHOD FOR DEPICTING THE REAR SURROUNDINGS OF A MOBILE PLATFORM COUPLED TO A TRAILER

Status: Final Rejection (§103)
Filed: Feb 02, 2024
Examiner: TSENG, CHENG YUAN
Art Unit: 2615
Tech Center: 2600 (Communications)
Assignee: Robert Bosch GmbH
OA Round: 2 (Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career allow rate: 84% (703 granted / 835 resolved), +22.2% vs. Tech Center average (above average)
Interview lift: strong, +15.7% for resolved cases with an interview
Typical timeline: 2y 6m average prosecution; 30 applications currently pending
Career history: 865 total applications across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs. TC avg)
§103: 28.1% (-11.9% vs. TC avg)
§102: 39.1% (-0.9% vs. TC avg)
§112: 15.4% (-24.6% vs. TC avg)
Tech Center averages are estimates; based on career data from 835 resolved cases.

Office Action

§103
DETAILED ACTION

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 15-19 and 22-27 are rejected under 35 U.S.C. 103 as being unpatentable over Greenwood (US 11,553,153) in view of Scharfenberger (US 2023/0342894).

Referring to claim 15, Greenwood discloses a method for depicting rear surroundings (fig. 1, FOV1/FOV2) of a mobile platform (fig. 1, vehicle V) coupled to a trailer (fig. 1, trailer T), wherein the mobile platform includes a first rearward-facing camera (fig. 1, camera C1), the method comprising the steps of: providing a first rearward image (fig. 3A, image IMG1) from the first rearward-facing camera; providing a second rearward image (fig. 3B, image IMG2) generated by a second rearward-facing camera (fig. 1, camera C2); determining a trailer image region (fig. 3A, trailer T image region) in the first rearward image in which a portion of the surroundings is obstructed by the coupled trailer (fig. 3A, trailer T obstructing the rear view); and replacing a portion of the trailer image region (fig. 3, trailer T image region) in the first image with a partial image region (fig. 3C, non-overlay region of trailer T in image IMG3) of the second rearward image to depict the rear surroundings of the mobile platform; wherein the trailer image region (fig. 3C, trailer T image region) is replaced with a corresponding subregion (fig. 3C, overlay region of trailer T in IMG3 from IMG2) of the second rearward image to depict the rear surroundings of the mobile platform (fig. 3C, IMG3).

Scharfenberger discloses: wherein the image region is determined using a trained machine learning system (para. 0049, machine learning), wherein the trained machine learning system is a trained neural network (fig. 5, convolutional neural network CNN 10/11/12) for semantic segmentation (para. 0078, semantic segmentation) of the first image.

Greenwood and Scharfenberger are analogous art because they are from the same field of endeavor, automotive vehicle camera systems. At the time of filing, it would have been obvious to a person of ordinary skill in the art, having the teachings of Greenwood and Scharfenberger before him or her, to modify the semi-transparent overlay of image IMG1 on top of image IMG2 of Greenwood to include the neural-network image semantic segmentation of Scharfenberger, such that the semi-transparent overlay portion of the image is formed with the neural network. The suggestion and/or motivation for doing so would be obtaining the advantage of better performance (para. 0010), as suggested by Scharfenberger. Therefore, it would have been obvious to combine Greenwood with Scharfenberger to obtain the invention as specified in the application claims.

Referring to claim 22, Greenwood discloses a method for segmenting objects (fig. 3C, overlapping IMG1/IMG2) of a digital first rearward image (fig. 3A, IMG1) of rear surroundings of a mobile platform (fig. 1, vehicle V) with a plurality of training cycles (fig. 9, generate composite image IMG7; 14:56-58), wherein each training cycle comprises the following steps: providing a digital first rearward image (fig. 3A, image IMG1) of rear surroundings of a mobile platform (fig. 1, vehicle V) including a trailer (fig. 1, trailer T) coupled to the mobile platform; providing a reference image (fig. 3B, image IMG2; fig. 6) associated with the digital first rearward image, wherein the trailer is labeled (fig. 6, image IMG2 with semi-transparent overlay of image IMG1) in the reference image; providing the digital first rearward image as an input signal (fig. 2, signal S1); wherein the trailer image region (fig. 3C, trailer T image region) is replaced with a corresponding subregion (fig. 3C, overlay region of trailer T in IMG3 from IMG2) of the second rearward image to depict the rear surroundings of the mobile platform (fig. 3C, IMG3).

Scharfenberger discloses a method for generating a trained neural network (fig. 5, convolutional neural network CNN 10/11/12) for semantically segmenting objects (para. 0078, semantic segmentation); providing an input signal (fig. 3, input image) to the neural network (fig. 3, CNN1); and adapting the neural network to minimize a deviation (para. 0070, deviations in estimations) of a semantic segmentation (para. 0078, semantic segmentation) from the associated reference image (fig. 3, input images) during the semantic segmentation in the digital first image, wherein the image region is determined using a trained machine learning system (para. 0049, machine learning), wherein the trained machine learning system is a trained neural network (fig. 5, convolutional neural network CNN 10/11/12) for semantic segmentation (para. 0078, semantic segmentation) of the first image. (See the TSM analysis in claim 15 above.)

Referring to claims 24-27, Greenwood discloses a system (fig. 1, rearview display system 1) for depicting rear surroundings (fig. 1, field of view FOV2) of a mobile platform (fig. 1, vehicle V) coupled to a trailer (fig. 1, trailer T), comprising: a first rearward-facing camera (fig. 1, camera C1); a second rearward-facing camera (fig. 1, camera C2; fig. 5, camera C3/C4); and a data processing device (fig. 1, central processing unit 3) to generate a depiction (fig. 4, image IMG4) of the rear surroundings of the mobile platform, including: a first input (fig. 2, camera C1 signal S1) for signals from the first rearward-facing camera; a second input (fig. 2, camera C2 signal via antenna 19/11) for signals from the second rearward-facing camera; a computing unit (fig. 2, central processing unit 3) or system-on-chip; and an output (fig. 2, display screen 15; fig. 4) for providing the depiction of the rear surroundings; wherein the computing unit or the system-on-chip is to: provide a first rearward image (fig. 3A, image IMG1) from the first rearward-facing camera; provide a second rearward image (fig. 3B, image IMG2) generated by a second rearward-facing camera (fig. 1, camera C2); determine a trailer image region (fig. 6, image IMG1 overlay region) in the first rearward image in which a portion (fig. 6, image IMG1 obscured by trailer T) of the surroundings is obscured by the coupled trailer; and replace a portion of the trailer image region (fig. 6, replace trailer T with image IMG2, replace trailer T with image IMG1L) in the first image with a partial image region of the second rearward image to depict the rear surroundings of the mobile platform (fig. 6, image IMG2 replaces trailer T region); wherein the trailer image region is replaced with a corresponding subregion (fig. 3C, overlay region of trailer T in IMG3 from IMG2) of the second rearward image to depict the rear surroundings of the mobile platform (fig. 3C, IMG3).

Scharfenberger discloses: wherein the image region is determined using a trained machine learning system (para. 0049, machine learning), wherein the trained machine learning system is a trained neural network (fig. 5, convolutional neural network CNN 10/11/12) for semantic segmentation (para. 0078, semantic segmentation) of the first image. (See the TSM analysis in claim 15 above.)

As to claim 16, Greenwood discloses the method of claim 15, wherein the trailer includes the second rearward-facing camera (fig. 1, camera C2 on trailer T).

As to claim 17, Greenwood discloses the method according to claim 15, wherein the mobile platform includes the second rearward-facing camera with a rearward-facing viewing angle (fig. 5, field of view FOV1) which is different from that of the first rearward-facing camera (fig. 5, FOV2).

As to claim 18, Greenwood discloses the method of claim 17, wherein the second rearward-facing camera is a side camera (fig. 5, camera C3/C4) of the mobile platform.

As to claim 19, Greenwood discloses the method of claim 18, wherein the second rearward-facing camera is on an exterior mirror (fig. 5, camera C3/C4; 12:23-32, side mirrors) of the mobile platform to visualize sides of the trailer when making a turn.

As to claim 23, Greenwood discloses the method of claim 15, comprising: based on the depiction of the rear surroundings of the mobile platform, providing a warning signal for warning a vehicle occupant (fig. 4, parking guidance P; 11:9-19) based on an error signal (11:20-12:22, vehicle parking guidance).

Conclusion

Applicant's amendment necessitated the new grounds of rejection presented in this Office action. Accordingly, this action is made final. See MPEP § 706.07(a). Applicant is reminded of the extension-of-time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than six months from the date of this final action.
Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Examiner Cheng-Yuan Tseng, whose telephone number is (571) 272-9772 and whose fax number is (571) 273-9772. The examiner can normally be reached Monday through Friday from 09:00 to 17:30 Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). For assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (in USA or Canada) or (571) 272-1000.

/CHENG YUAN TSENG/
Primary Examiner, Art Unit 2615
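Stripped of claim language, the technique at issue in the rejection is image compositing: a segmentation mask marks the trailer region in the tractor-mounted camera's image, and those pixels are replaced with the corresponding subregion from the second (trailer-mounted or side) camera. A minimal NumPy sketch under that reading follows; the function name, array shapes, and synthetic inputs are illustrative assumptions, not taken from either reference, and perspective alignment between the two cameras is out of scope.

```python
import numpy as np

def composite_rear_view(first_img, second_img, trailer_mask):
    """Replace the trailer-obstructed region of the first rearward image
    with the corresponding subregion of the second rearward image.

    first_img, second_img: HxWx3 uint8 arrays, assumed already warped to
    a common perspective (alignment is not shown here).
    trailer_mask: HxW boolean array, True where the trailer obstructs the
    view; in practice this would come from a semantic-segmentation network.
    """
    out = first_img.copy()
    out[trailer_mask] = second_img[trailer_mask]  # per-pixel replacement
    return out

# Tiny synthetic example: a 4x4 "image" with the trailer in the centre.
first = np.zeros((4, 4, 3), dtype=np.uint8)          # tractor camera view
second = np.full((4, 4, 3), 255, dtype=np.uint8)     # second camera view
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                                # trailer occupies the centre

view = composite_rear_view(first, second, mask)
print(view[1, 1])  # obstructed pixel now taken from the second camera
print(view[0, 0])  # unobstructed pixel unchanged
```

The boolean mask broadcasts over the colour channel, so the replacement is a single vectorised assignment; the copy keeps the original first image intact for display alongside the composite.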

Prosecution Timeline

Feb 02, 2024: Application Filed
Dec 01, 2025: Non-Final Rejection (§103)
Mar 03, 2026: Response Filed
Mar 12, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602844: Graphics Processor (granted Apr 14, 2026; 2y 5m to grant)
Patent 12586285: Methods and Systems for Markerless Facial Motion Capture (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579415: Area-Efficient Convolutional Block (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572355: Modular Addition Instruction (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567173: Infant 2D Pose Estimation and Posture Detection System (granted Mar 03, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants; study what changed in those applications to get past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 99% (+15.7%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 835 resolved cases by this examiner; grant probability derived from the career allow rate.
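The headline figures above are simple derivations from the examiner's career counts (703 granted of 835 resolved, +15.7% observed interview lift). A minimal sketch of the arithmetic follows; the 99% display cap on the interview-adjusted figure is an assumption to match the dashboard, not a documented rule.

```python
# Reproduce the dashboard's derived statistics from the raw career counts.
granted = 703           # applications granted by this examiner
resolved = 835          # total resolved cases (granted + abandoned)
interview_lift = 0.157  # observed allowance-rate lift for cases with an interview

allow_rate = granted / resolved                          # career allow rate
with_interview = min(allow_rate + interview_lift, 0.99)  # assumed 99% display cap

print(f"Grant probability: {allow_rate:.0%}")      # 84%
print(f"With interview:    {with_interview:.0%}")  # 99%
```

Note that 84.2% + 15.7% would exceed 99%, which is presumably why the dashboard reports 99% rather than a raw sum.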
