Prosecution Insights
Last updated: April 19, 2026
Application No. 18/240,781

SURROUND VIEW USING REAR VIEW CAMERA

Status: Non-Final OA (§103)
Filed: Aug 31, 2023
Examiner: LEE, JIMMY S
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: Texas Instruments Incorporated
OA Round: 3 (Non-Final)
Grant Probability: 56% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 7m
Grant Probability With Interview: 84%
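The relationship between the baseline and with-interview predictions can be made explicit with a little arithmetic. This is an illustrative sketch only; the variable names are hypothetical, and only the two percentages come from the report above.

```python
# Illustrative arithmetic relating the two predicted grant probabilities.
baseline = 0.56        # predicted grant probability without an interview
with_interview = 0.84  # predicted grant probability with an interview

absolute_lift = (with_interview - baseline) * 100          # percentage points
relative_lift = (with_interview - baseline) / baseline * 100

print(f"absolute lift: {absolute_lift:.0f} points")  # 28 points
print(f"relative lift: {relative_lift:.0f}%")        # 50%
```

The 28-point spread between the two predictions is consistent in magnitude with the examiner's +28.1% career interview lift reported below.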

Examiner Intelligence

Career Allow Rate: 56% (170 granted / 302 resolved; -1.7% vs TC avg)
Interview Lift: +28.1% for resolved cases with interview vs without (strong)
Typical Timeline: 3y 7m avg prosecution; 33 applications currently pending
Career History: 335 total applications across all art units
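The headline allow rate follows directly from the raw counts in the card. A minimal recomputation, assuming the reported delta is simply examiner rate minus Tech Center average (the variable names are hypothetical):

```python
# Recompute the examiner-card figures from the raw counts above.
granted, resolved = 170, 302

allow_rate = granted / resolved * 100
print(f"career allow rate: {allow_rate:.1f}%")  # 56.3%, displayed as 56%

# The -1.7% delta is reported, not derived here; if it is examiner rate
# minus TC average, the implied TC average would be:
tc_delta = -1.7
implied_tc_avg = allow_rate - tc_delta
print(f"implied TC average: {implied_tc_avg:.1f}%")
```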

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 71.5% (+31.5% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 302 resolved cases.
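The Tech Center baselines implied by these deltas can be back-calculated. A hedged sketch, assuming each delta is the examiner's share minus the estimated TC average (the dictionary layout is an assumption; only the percentages come from the table above):

```python
# Back-calculate the implied Tech Center average for each statute:
# implied TC avg = examiner share - reported delta.
stats = {
    "101": (3.2, -36.8),
    "103": (71.5, +31.5),
    "102": (8.8, -31.2),
    "112": (12.8, -27.2),
}
for statute, (share, delta) in stats.items():
    tc_avg = share - delta
    print(f"§{statute}: examiner {share:.1f}% vs implied TC avg {tc_avg:.1f}%")
```

Notably, all four rows imply the same 40.0% baseline, suggesting the deltas were computed against a single Tech Center estimate rather than per-statute averages.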

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 7 November 2025 has been entered.

Response to Amendment

The amendment to claim 8 filed on 7 November 2025 does not comply with the requirements of 37 CFR 1.121(c) because claim 8 incorrectly identifies the claim's status, i.e., claim 8 is indicated as (Previously Presented) while having been amended and should more accurately be indicated as (Currently Amended). Amendments to the claims filed on or after July 30, 2003 must comply with 37 CFR 1.121(c), which states:

(c) Claims. Amendments to a claim must be made by rewriting the entire claim with all changes (e.g., additions and deletions) as indicated in this subsection, except when the claim is being canceled. Each amendment document that includes a change to an existing claim, cancellation of an existing claim or addition of a new claim, must include a complete listing of all claims ever presented, including the text of all pending and withdrawn claims, in the application. The claim listing, including the text of the claims, in the amendment document will serve to replace all prior versions of the claims, in the application.
In the claim listing, the status of every claim must be indicated after its claim number by using one of the following identifiers in a parenthetical expression: (Original), (Currently amended), (Canceled), (Withdrawn), (Previously presented), (New), and (Not entered).

(1) Claim listing. All of the claims presented in a claim listing shall be presented in ascending numerical order. Consecutive claims having the same status of “canceled” or “not entered” may be aggregated into one statement (e.g., Claims 1–5 (canceled)). The claim listing shall commence on a separate sheet of the amendment document and the sheet(s) that contain the text of any part of the claims shall not contain any other part of the amendment.

(2) When claim text with markings is required. All claims being currently amended in an amendment paper shall be presented in the claim listing, indicate a status of “currently amended,” and be submitted with markings to indicate the changes that have been made relative to the immediate prior version of the claims. The text of any added subject matter must be shown by underlining the added text. The text of any deleted matter must be shown by strike-through except that double brackets placed before and after the deleted characters may be used to show deletion of five or fewer consecutive characters. The text of any deleted subject matter must be shown by being placed within double brackets if strike-through cannot be easily perceived. Only claims having the status of “currently amended,” or “withdrawn” if also being amended, shall include markings. If a withdrawn claim is currently amended, its status in the claim listing may be identified as “withdrawn—currently amended.”

(3) When claim text in clean version is required. The text of all pending claims not being currently amended shall be presented in the claim listing in clean version, i.e., without any markings in the presentation of text.
The presentation of a clean version of any claim having the status of “original,” “withdrawn” or “previously presented” will constitute an assertion that it has not been changed relative to the immediate prior version, except to omit markings that may have been present in the immediate prior version of the claims of the status of “withdrawn” or “previously presented.” Any claim added by amendment must be indicated with the status of “new” and presented in clean version, i.e., without any underlining.

(4) When claim text shall not be presented; canceling a claim. (i) No claim text shall be presented for any claim in the claim listing with the status of “canceled” or “not entered.” (ii) Cancellation of a claim shall be effected by an instruction to cancel a particular claim number. Identifying the status of a claim in the claim listing as “canceled” will constitute an instruction to cancel the claim.

(5) Reinstatement of previously canceled claim. A claim which was previously canceled may be reinstated only by adding the claim as a “new” claim with a new claim number.

Since the reply filed on 7 November 2025 appears to be bona fide and the error occurs only in this one instance with claim 8, the amendment is objected to and will be addressed as a claim objection.

Claim Objections

Claim 8 is objected to because of the following informalities: the claim is indicated as (Previously Presented) while having been amended, meaning the indication is inaccurate as to the true claim status. Upon review, the examiner can nevertheless consider the amendment to claim 8, as the amendments themselves are accurately indicated by underlining. For the purposes of examination, the status of claim 8 is treated as (Currently Amended). Appropriate correction is required.
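The informality at issue, amendment markings inside a claim labeled (Previously Presented), amounts to a simple consistency rule. Below is a minimal sketch of that rule; the function names and the use of `<u>` tags to stand in for underlining are assumptions for illustration, not part of any USPTO tooling.

```python
# Sketch of the 37 CFR 1.121(c) consistency rule: a claim whose text
# carries amendment markings (underlining for additions, strike-through
# or double brackets for deletions) must be labeled "Currently Amended".
MARKED_STATUSES = {"Currently Amended", "Withdrawn - Currently Amended"}

def has_markings(claim_text: str) -> bool:
    """Detect amendment markings (underline tags or double brackets)."""
    return "<u>" in claim_text or "[[" in claim_text

def status_is_consistent(status: str, claim_text: str) -> bool:
    """True when the parenthetical status matches the claim's markings."""
    if has_markings(claim_text):
        return status in MARKED_STATUSES
    return status not in MARKED_STATUSES

# Claim 8's situation in this Office action: underlined new text under a
# (Previously Presented) label -> inconsistent, drawing an objection.
print(status_is_consistent("Previously Presented", "A method <u>wherein</u> ..."))  # False
print(status_is_consistent("Currently Amended", "A method <u>wherein</u> ..."))     # True
```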
Response to Arguments

Applicant’s arguments with respect to claim(s) 1, 8, and 16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 4-8, 11, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sakuragi; Tomoki (US 20190281229 A1) in view of Yamashita; Yutaro et al. (US 20220084257 A1), further in view of INUI; Yoji et al. (US 20180253106 A1).

Regarding claim 1, Sakuragi teaches, A circuit (¶45, and Fig. 1, “controller 41” formed of a central processing unit (CPU) as depicted in Fig. 1) comprising: a first processor core, (¶60 and Fig. 1, “cut-out processor 452” included in bird's-eye view video generator 45 depicted in Fig. 1) and a second processor core, (¶60 and Fig.
1, “synthesis processor 453” included in bird's-eye view video generator 45 depicted in Fig. 1) in which: the first processor core (¶68,60, and Fig. 1, “cut-out processor 452” included in bird's-eye view video generator 45 depicted in Fig. 1) is configured to: provide a rear view image processing path (¶68 and Fig. 1, “cut-out processor 452 performs cut-out processing of cutting videos of predetermined areas” which includes cut-out processor 452 cutting “a backward cut area from the surroundings video data out of the rear camera 12”) that is configured to obtain a source rear view image; (¶68, “a backward cut area” from the surroundings video data out of the rear camera 12) propagate one of the source rear view image (¶68, cut-out processor 452 output “backward cut area from the surroundings video data out of the rear camera 12”) and the copied rear view image, as a propagated rear view image, (¶68, cut-out processor 452 output “backward cut area from the surroundings video data out of the rear camera 12” to the synthesis processor 453) through the rear view image processing path; (¶68 and Fig. 1, cut-out processor 452, as part of bird’s-eye view video generator 45 depicted in fig. 1, outputs video image data of the videos obtained including “backward cut area from the surroundings video data out of the rear camera 12” to the “synthesis processor 453”) and the second processor core (¶71,60, and Fig. 1, “synthesis processor 453” included in bird's-eye view video generator 45 depicted in Fig. 1) is configured to: provide a surround view image processing path (¶71,68, and Fig. 
1, synthesis processor 453 “generates the bird's-eye view video 100” by “synthesizing the videos that are cut out by the cut-out processor 452” such as the backward cut area from the surroundings video data) and provide a surround view image (¶71 and 68, “synthesis processor 453 generates the bird's-eye view video 100” by synthesizing “the videos that are cut out by the cut-out processor 452”) in the surround view image processing path (¶71 and 68, “synthesis processor 453”) based in part on the transferred rear view image, (¶71 and 68, videos that are cut out by the cut-out processor 452 including “backward cut area from the surroundings video data out of the rear camera 12”) But does not explicitly teach, copy the source rear view image in the rear view image processing path to generate a copied rear view image; provide a surround view image processing path after the propagated rear view image is propagated through the rear view image processing path; receive the other of the source rear view image and the copied rear view image, as transferred rear view image, from the rear view image processing path; wherein the rear view image is generated and copied before the surround view image processing path is provided. However, Yamashita teaches additionally, copy the source rear view image (¶67 and fig. 1, splitter 21 “supplies an image 1 captured by the camera 10” to two processing units as depicted in fig. 1) in the rear view image processing path (¶67,60, and fig. 2, “camera 10” arranged at “a rear of the vehicle body” to capture the rear of the vehicle body 60” depicted in fig. 2) to generate a copied rear view image; (¶67, splitter 21 supplies image 1 to CMS signal processing unit 22 and “rear view signal processing unit 23”) provide a surround view image processing path (¶73 and fig. 1, “image superimposing unit 24” depicted in fig. 1) after the propagated rear view image (¶73 and fig. 
1, image superimposing unit 24 receives “rear view original image 3” generated by the rear view signal processing unit 23) is propagated through the rear view image processing path; (¶73,67,60, and fig. 1-2, “rear view original image 3” captured by “camera 10” of a rear of the vehicle body received by “splitter 21” supplying image captured to “rear view signal processing unit 23” then to “image superimposing unit 24”, depicted in fig. 1) receive the other of the source rear view image and the copied rear view image, (¶67 and fig. 1, “CMS signal processing unit 22” receives an image 1 captured by the camera 10 supplied from “splitter 21” separate from image 1 supplied to rear view signal processing unit 23) as transferred rear view image, (¶67,60, and fig. 2, image 1 of the “rear of the vehicle body” captured by the camera 10 supplied to “CMS signal processing unit 22”) from the rear view image processing path; (¶67,60, and fig. 2, “camera 10” arranged at “a rear of the vehicle body” to capture the rear of the vehicle body 60” depicted in fig. 2) It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the bird’s-eye view video generation of Sakuragi with the image signal processing of Yamashita which splits received images to two different processing units. This allows for separate viewings that can provide different presentations such as a distortion corrected image and a superimposed image. However, Inui teaches additionally, wherein the rear view image (¶102-109 and fig. 9, acquired “rear image” at S301 as disclosed in fig. 9) is generated and copied (¶102-109 and fig. 9, display controller 104 at S303 “sets a virtual projection plane and a virtual viewpoint” for rear image and at S304 “projects the rear image onto the virtual projection plane” disclosed in fig. 9) before the surround view image processing path is provided. (¶102-109 and fig. 
9, display controller 104 at S305 “converts the image projected on the virtual projection plane into an image seen” from a local “bird’s-eye view image” which follows S303 and S304 as disclosed in fig. 9) Inui discloses view modes which can provide a process which displays rear view images, or both rear view images and bird-view images. The modes include the acquisition and projection of a rear image for display, and the conversion of the projected rear image to a projection plane for a bird’s-eye view. This projection for a bird’s-view image is separate from and subsequent to the projection of the rear view image individually. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the bird’s-eye view video generation of Sakuragi with the image signal processing of Yamashita with the rear view mode of Inui which processes the rear view image before processing of a bird’s-eye view. This allows a display generation that focuses on a particular display position capable of following the movement of a display target. Regarding claim 4, Sakuragi with Yamashita with Inui teaches the limitations of claim 1, Sakuragi teaches additionally, a display interface, (¶45,71, and Fig. 1, “display controller 48” included in controller 41 depicted in Fig. 1) in which: the surround view image processing path (¶71 and Fig. 1, “synthesis processor 453”, of bird’s-eye view video generator 45 depicted in Fig. 1, “outputs the generated bird's-eye view video 100”) is configured to transfer the surround view image (¶71, “outputs the generated bird's-eye view video 100”) to the display interface. (¶71, “synthesis processor 453 outputs the generated bird's-eye view video 100” to the “display controller 48”) Yamashita teaches additionally, the rear view image processing path (¶73,67,60,115, and fig. 
1-2, “rear view original image 3” captured by “camera 10” of a rear of the vehicle body received by “splitter 21” supplying image captured to “rear view signal processing unit 23” then to “image superimposing unit 24”, depicted in fig. 1) is configured to transfer the propagated rear view image to the display interface; (¶115 and fig. 1, rear view signal processing unit 23 outputs “virtual image superimposed with rear view image” to the “rear view display unit 40” depicted in fig. 1) It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the bird’s-eye view video generation of Sakuragi with the image signal processing of Yamashita with the rear view mode of Inui which splits received images to two different processing units. This allows for separate viewings that can provide different presentations such as a distortion corrected image and a superimposed image. Regarding claim 5, Sakuragi with Yamashita with Inui teaches the limitations of claim 1, Sakuragi teaches additionally, a camera interface, (¶45 and fig. 1, controller 41 includes “video data acquisition unit 42”) in which: the rear view image processing path (¶68 and Fig. 1, “cut-out processor 452 performs cut-out processing of cutting videos of predetermined areas” which includes cut-out processor 452 cutting “a backward cut area from the surroundings video data out of the rear camera 12”) is configured to: receive a rear view video signal (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the rear camera 12” acquired by “video data acquisition unit 42” depicted in Fig. 1) from the camera interface; (¶68,61, and Fig. 1, “video data acquisition unit 42” depicted in Fig. 1) and generate the source rear view image (¶68 and Fig. 
1, “cut-out processor 452 cuts a backward cut area from the surroundings video data out of the rear camera 12”) based on the rear view video signal. (¶68 and Fig. 1, a backward cut area from the “surroundings video data out of the rear camera 12”) Regarding claim 6, Sakuragi with Yamashita with Inui teaches the limitations of claim 5, Sakuragi teaches additionally, first processor core (¶68 and Fig. 1, “cut-out processor 452 performs cut-out processing of cutting videos of predetermined areas” which includes cut-out processor 452 cutting “a backward cut area from the surroundings video data out of the rear camera 12”) is configured to: receive a first side view video signal, (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the left-side camera 13” acquired by “video data acquisition unit 42” depicted in Fig. 1) a second side view video signal, (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the right-side camera 14” acquired by “video data acquisition unit 42” depicted in Fig. 1) and a front view video signal (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the front camera 11” acquired by “video data acquisition unit 42” depicted in Fig. 1) from the camera interface; (¶68,61, and Fig. 1, “video data acquisition unit 42” depicted in Fig. 1) generate a first side view image, (¶68 and Fig. 1, “cut-out processor 452 cuts a left-side cut area out of the surroundings video data from the left-side camera 13”) a second side view image, (¶68 and Fig. 1, “cut-out processor 452 cuts a right-side cut area out of the surroundings video data from the right-side camera 14”) and a front view image (¶68 and Fig. 
1, “cut-out processor 452 cuts a forward cut area out of the surroundings video data from the front camera 11”) based on the first side view video signal, (¶68 and Fig. 1, a left-side cut area from the “surroundings video data out of the left-side camera 13”) the second side view video signal, (¶68 and Fig. 1, a right-side cut area from the “surroundings video data out of the right-side camera 14”) and the front view video signal; (¶68 and Fig. 1, a forward cut area from the “surroundings video data out of the forward camera 11”) and transfer the first side view image, (¶68 and Fig. 1, “a left-side cut area” out of the left-side camera 13) the second side view image, (¶68 and Fig. 1, “a right-side cut area” out of the right-side camera 14) and the front view image (¶68 and Fig. 1, “a forward cut area” out of the forward camera 11) to the surround view image processing path. (¶68 and Fig. 1, “cut-out processor 452 outputs the video image data of the videos obtained by performing the cutting processing”, which includes “a left-side cut area”, “a right-side cut area”, and “a forward cut area”, to the “synthesis processor 453”) Regarding claim 7, Sakuragi with Yamashita with Inui teaches the limitations of claim 6, Sakuragi teaches additionally, surround view image processing path (¶71 and Fig. 1, synthesis processor 453 “generates the bird's-eye view video 100” by “synthesizing the videos that are cut out by the cut-out processor 452”) is configured to generate the surround view image (¶71 and 68 and Fig. 1, synthesis processor 453 generates “bird's-eye view video 100” by synthesizing the videos that are “cut out by the cut-out processor 452” which includes “a backward cut area”, “a left-side cut area”, “a right-side cut area”, and “a forward cut area”) based on the transferred rear view image, (¶68 and Fig. 1, “a backward cut area” out of the rear camera 12) the first side view image, (¶68 and Fig. 
1, “a left-side cut area” out of the left-side camera 13) the second side view image, (¶68 and Fig. 1, “a right-side cut area” out of the right-side camera 14) and the front view image (¶68 and Fig. 1, “a forward cut area” out of the forward camera 11) received from the rear view image processing path. (¶68 and Fig. 1, “cut-out processor 452 outputs the video image data of the videos obtained by performing the cutting processing”, which includes “a backward cut area”, “a left-side cut area”, “a right-side cut area”, and “a forward cut area”, to the “synthesis processor 453”) Regarding claim 8, Sakuragi teaches, A method (Title, “BIRD'S-EYE VIEW VIDEO GENERATION METHOD”) comprising: establishing, by a video processing circuit (¶68,60,45 and Fig. 1, executed commands contained in the programs to “cut-out processor 452” performs cut-out processing of cutting videos in the “bird's-eye view video generator 45” depicted in Fig. 1) a rear view image processing path; (¶68 and Fig. 1, “cut-out processing of cutting videos of predetermined areas” which includes cut-out processor 452 cutting “a backward cut area from the surroundings video data out of the rear camera 12”) generating a source rear view image (¶68 and Fig. 1, “a backward cut area from the surroundings video data out of the rear camera 12”) in the rear view image processing path; (¶68 and Fig. 1, “cut-out processor 452” cuts a backward cut area from the surroundings video data out of the rear camera 12) propagating one of the source rear view image (¶68, cut-out processor 452 output “backward cut area from the surroundings video data out of the rear camera 12”) and the copied rear view image, as a propagated rear view image, (¶68, cut-out processor 452 output “backward cut area from the surroundings video data out of the rear camera 12” to the synthesis processor 453) through the rear view image processing path; (¶68 and Fig. 
1, cut-out processor 452, as part of bird’s-eye view video generator 45 depicted in fig. 1, outputs video image data of the videos obtained including “backward cut area from the surroundings video data out of the rear camera 12” to the “synthesis processor 453”) establishing, by the video processing circuit, (¶71,60, and Fig. 1, “synthesis processor 453” included in bird's-eye view video generator 45 depicted in Fig. 1) a surround view image processing path (¶71,68, and Fig. 1, synthesis processor 453 “generates the bird's-eye view video 100” by “synthesizing the videos that are cut out by the cut-out processor 452” such as the backward cut area from the surroundings video data) and generating, by the video processing circuit, (¶71,60, and Fig. 1, “synthesis processor 453” included in bird's-eye view video generator 45 depicted in Fig. 1) a surround view image in the surround view image processing path, (¶71 and 68, “synthesis processor 453 generates the bird's-eye view video 100” by synthesizing “the videos that are cut out by the cut-out processor 452”) based in part on the transferred rear view image, (¶71 and 68, videos that are cut out by the cut-out processor 452 including “backward cut area from the surroundings video data out of the rear camera 12”) in the surround view image processing path, (¶71 and Fig. 
1, synthesis processor 453 “generates the bird's-eye view video 100” by “synthesizing the videos that are cut out by the cut-out processor 452”) But does not explicitly teach, copying the source rear view image in the rear view image processing path to generate a copied rear view image; establishing, by the video processing circuit, a surround view image processing path after the propagated rear view image is propagated through the rear view image processing path; receiving, in the surround view image processing path, the other of the source rear view image and the copied rear view image, as a transferred rear view image, from the rear view image processing path; wherein the rear view image is generated and copied before the surround view image processing path is provided. However, Yamashita teaches additionally, copying the source rear view image (¶67 and fig. 1, splitter 21 “supplies an image 1 captured by the camera 10” to two processing units as depicted in fig. 1) in the rear view image processing path (¶67,60, and fig. 2, “camera 10” arranged at “a rear of the vehicle body” to capture the rear of the vehicle body 60” depicted in fig. 2) to generate a copied rear view image; (¶67, splitter 21 supplies image 1 to CMS signal processing unit 22 and “rear view signal processing unit 23”) establishing, by the video processing circuit, (¶67-68 and fig. 1, “CMS signal processing unit 22” as part of “image processing apparatus 20” depicted in fig. 1) a surround view image processing path (¶67-68 and fig. 1, “CMS signal processing unit 22” generates a CMS image) after the propagated rear view image (¶67-68 and fig. 1, CMS signal processing unit 22 after splitter 21 supplies “image 1 captured by the camera 10” also sent to “rear view signal processing unit 23”) is propagated through the rear view image processing path; (¶73,67,60, and fig. 
1-2, “rear view original image 3” captured by “camera 10” of a rear of the vehicle body received by “splitter 21” supplying image captured to “rear view signal processing unit 23” then to “image superimposing unit 24”, depicted in fig. 1) receiving, in the surround view image processing path, (¶67 and fig. 1¸ ”CMS signal processing unit 22” of the image processing apparatus 20 depicted in fig. 1 “generates CMS image”) the other of the source rear view image and the copied rear view image, (¶67 and fig. 1, “CMS signal processing unit 22” of the image processing apparatus 20 receives an image 1 captured by the camera 10 supplied from “splitter 21” separate from image 1 supplied to rear view signal processing unit 23) as a transferred rear view image, (¶67,60, and fig. 2, image 1 of the “rear of the vehicle body” captured by the camera 10 supplied to “CMS signal processing unit 22”) from the rear view image processing path; (¶67,60, and fig. 2, “camera 10” arranged at “a rear of the vehicle body” to capture the rear of the vehicle body 60” depicted in fig. 2) It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the bird’s-eye view video generation of Sakuragi with the image signal processing of Yamashita which splits received images to two different processing units. This allows for separate viewings that can provide different presentations such as a distortion corrected image and a superimposed image. Inui teaches additionally, wherein the rear view image (¶102-109 and fig. 9, acquired “rear image” at S301 as disclosed in fig. 9) is generated and copied (¶102-109 and fig. 9, display controller 104 at S303 “sets a virtual projection plane and a virtual viewpoint” for rear image and at S304 “projects the rear image onto the virtual projection plane” disclosed in fig. 9) before the surround view image processing path is provided. (¶102-109 and fig. 
9, display controller 104 at S305 “converts the image projected on the virtual projection plane into an image seen” from a local “bird’s-eye view image” which follows S303 and S304 as disclosed in fig. 9) Inui discloses view modes which can provide a process which displays rear view images, or both rear view images and bird’s-eye view images. The modes include the acquisition and projection of a rear image for display, and the conversion of the projected rear image to a projection plane for a bird’s-eye view. This projection for a bird’s-eye view image is separate from and subsequent to the projection of the rear view image individually.

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the bird’s-eye view video generation of Sakuragi with the image signal processing of Yamashita with the rear view mode of Inui, which processes the rear view image before processing of a bird’s-eye view. This allows a display generation that focuses on a particular display position capable of following the movement of a display target.

Regarding claim 11, Sakuragi with Yamashita with Inui teaches the limitations of claim 8. Yamashita teaches additionally, continuing to operate the rear view image processing path (¶68,71, and fig. 7, displaying the bird's-eye view video that includes “backward cut area from the surroundings video data out of the rear camera 12” as part of generated “bird’s-eye view video 100”) after the surround view image processing path is established. (¶79,89,68, and fig. 7, “processes performed by the bird's-eye view video generation device 40” which includes cycle of “displaying of bird’s-eye view video” at step S17 to “continue the process” that continually “generate and display bird’s-eye view video” at step S16 as depicted in fig. 7 includes “backward cut area from the surroundings video data out of the rear camera 12”)

Regarding claim 13, Sakuragi with Yamashita with Inui teaches the limitations of claim 8. Sakuragi teaches additionally, the video processing circuit includes a first processor core, (¶68 and Fig. 1, “cut-out processor 452 performs cut-out processing of cutting videos of predetermined areas” which includes cut-out processor 452 cutting “a backward cut area from the surroundings video data out of the rear camera 12”) the method further comprising: receiving, by the first processor core, (¶68 and Fig. 1, “cut-out processor 452”) a rear view video signal, (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the rear camera 12” acquired by “video data acquisition unit 42” depicted in Fig. 1) a first side view video signal, (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the left-side camera 13” acquired by “video data acquisition unit 42” depicted in Fig. 1) a second side view video signal, (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the right-side camera 14” acquired by “video data acquisition unit 42” depicted in Fig. 1) and a front view video signal, (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the front camera 11” acquired by “video data acquisition unit 42” depicted in Fig. 1) Yamashita teaches additionally, wherein the rear view video signal (¶67 and 60, “image 1 captured by the camera 10” of the rear of the vehicle) is received in the rear view image processing path; (¶67 and 60, supplied “image 1 captured by the camera 10” to CMS signal processing unit 22 and “rear view signal processing unit 23”) and processing, by the rear view image processing path, (¶115-116 and fig. 1, “image superimposing unit 24 continues to output the virtual image” from the processed rear view signal processing unit 23 depicted in fig. 1) the rear view video signal (¶115-116, output the virtual image of “rear view image”) to produce the source rear view image. (¶115-116, generated “virtual image superimposed rear view image”)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the bird’s-eye view video generation of Sakuragi with the image signal processing of Yamashita with the rear view mode of Inui, which splits received images to two different processing units. This allows for separate viewings that can provide different presentations such as a distortion corrected image and a superimposed image.

Regarding claim 14, Sakuragi with Yamashita with Inui teaches the limitations of claim 13. Sakuragi teaches additionally, the video processing circuit includes a second processor core, (¶71,60, and Fig. 1, “synthesis processor 453” included in “bird's-eye view video generator 45” depicted in Fig. 1) the method further comprising: processing, by the first processor core, (¶68 and Fig. 1, “cut-out processor 452 performs cut-out processing of cutting videos of predetermined areas” which includes cut-out processor 452 cutting “a backward cut area from the surroundings video data out of the rear camera 12”) the first side view video signal, (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the left-side camera 13” acquired by “video data acquisition unit 42” depicted in Fig. 1) the second side view video signal, (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the right-side camera 14” acquired by “video data acquisition unit 42” depicted in Fig. 1) and the front view video signal (¶68,61, and Fig. 1, “cut-out processor 452” of the bird’s-eye view video generator 45 cuts “surroundings video data out of the front camera 11” acquired by “video data acquisition unit 42” depicted in Fig. 1) to produce a first side view image, (¶68, “left-side cut area”) a second side view image, (¶68, “right-side cut area”) and a front view image; (¶68, “forward cut area”) and transferring the first side view image, (¶68, “left-side cut area”) the second side view image, (¶68, “right-side cut area”) and the front view image (¶68, “forward cut area”) to a surround view module of the second processor core (¶68, cut-out processor 452 outputs to “the synthesis processor 453”) in the surround view image processing path. (¶68, “cut-out processor 452 outputs” the left-side cut area, right-side cut area, and forward cut area “to the synthesis processor 453”)

Regarding claim 15, dependent on claim 14, it is the method claim similar to circuit claim 7, dependent on claim 6. Refer to the rejection of claim 7 to teach the limitations of claim 15.

Claims 2-3 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Sakuragi; Tomoki (US 20190281229 A1) in view of Yamashita; Yutaro et al. (US 20220084257 A1) in view of INUI; Yoji et al. (US 20180253106 A1) in view of Wu; Chih-Huan et al. (US 20210344881 A1).

Regarding claim 2, Sakuragi with Yamashita with Inui teaches the limitations of claim 1, but does not explicitly teach the additional limitations of claim 2. However, Wu teaches additionally, the first processor core (¶31 and fig. 2, “motion detection device 60” depicted in fig. 2 performs “step S200 and S202 are executed to startup the monitoring system”) is configured to provide the rear view image processing path (¶31 and 58, startup of “the event camera 1306”) in response to initialization (¶31, “startup” of the monitoring system) of the first processor core. (¶31 and fig. 2, “motion detection device 60” depicted in fig. 2)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the bird’s-eye view video generation of Sakuragi with the image signal processing of Yamashita with the rear view mode of Inui with the detection method of Wu, which has an executed startup. This allows for a monitoring system capable of managing the active performance of detectors and processors.

Regarding claim 3, Sakuragi with Yamashita with Inui teaches the limitations of claim 1, but does not explicitly teach the additional limitations of claim 3. However, Wu teaches additionally, the first processor core (¶58 and fig. 13, “SMD 1312” of digital processing circuit 1310 depicted in fig. 13) is configured to initialize (¶58, “SMD 1312 is awakened by a trigger signal” once the camera detects that a pixel value changes, while “external processor 1317” is only awakened when motion detection indicates that an actual motion occurs “sent from the digital processing circuit 1310”) faster than the second processor core. (¶58 and fig. 13, “external processor 1317” of backend system device 1315 depicted in fig. 13)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the bird’s-eye view video generation of Sakuragi with the image signal processing of Yamashita with the rear view mode of Inui with the detection method of Wu, which has an executed startup. This allows for a monitoring system capable of managing the active performance of detectors and processors.

Regarding claim 9, Sakuragi with Yamashita with Inui teaches the limitations of claim 8, but does not explicitly teach the additional limitations of claim 9. However, Wu teaches additionally, the video processing circuit (¶54,58, and fig. 13, “monitoring system 1300” depicted in fig. 13) includes a first processor core (¶54,58, and fig. 13, monitoring system 1300 with “SMD 1312” of digital processing circuit 1310 depicted in fig. 13) and a second processor core, (¶54,58, and fig. 13, monitoring system 1300 with “external processor 1317” of backend system device 1315 depicted in fig. 13) the method further comprising: establishing the rear view image processing path (¶58, SMD 1312 awakened to perform “motion detection” once detecting “pixel value changes”) in the first processor core (¶58, “SMD 1312”) in response to initialization (¶58, “SMD 1312 is awakened” once event camera 1306 “detects that a pixel value changes”) of the first processor core; (¶58, “SMD 1312”) and establishing the surround view image processing path (¶58, external processor 1317 awakened to perform “image processing and/or the video recording operation”) in response to initialization (¶58, “SMD 1312 is awakened by a trigger signal” once the camera detects that a pixel value changes, while “external processor 1317” is only awakened when motion detection indicates that an actual motion occurs “sent from the digital processing circuit 1310”) of the second processor core. (¶58 and fig. 13, “external processor 1317” of backend system device 1315 depicted in fig. 13)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the bird’s-eye view video generation of Sakuragi with the image signal processing of Yamashita with the rear view mode of Inui with the detection method of Wu, which has an executed startup. This allows for a monitoring system capable of managing the active performance of detectors and processors.

Regarding claim 10, dependent on claim 9, it is the method claim similar to circuit claim 3, dependent on claim 1. Refer to the rejection of claim 3 to teach the limitations of claim 10.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Sakuragi; Tomoki (US 20190281229 A1) in view of Yamashita; Yutaro et al.
(US 20220084257 A1) in view of INUI; Yoji et al. (US 20180253106 A1) in view of Chinomi; Satoshi et al. (US 20060274147 A1).

Regarding claim 12, Sakuragi with Yamashita with Inui teaches the limitations of claim 8. Sakuragi teaches additionally, providing the surround view image (¶71, “synthesis processor 453 outputs the generated bird's-eye view video 100”) to the display interface; (¶71, “synthesis processor 453 outputs the generated bird's-eye view video 100” to the “display controller 48”) and displaying the surround view image. (¶77 and Fig. 1, causing the “display panel 30 to display the bird's-eye view video 100” depicted in Fig. 1)

Yamashita teaches additionally, providing the propagated rear view image to a display interface; (¶116, “output the virtual image superimposed rear view image generated” to the “rear view display unit 40”) displaying the propagated rear view image (¶116, “rear view display unit 40” displays “output the virtual image superimposed rear view image”) on a first portion of a screen; (¶116 and 59, rear view display unit 40 being “a screen of a car navigation system”) but does not explicitly teach, displaying the surround view image on a second portion of the screen, in which the second portion does not overlap the first portion.

However, Chinomi teaches additionally, providing the propagated rear view image (¶33,37, and Fig. 4, “rear direct image 110 picked up with the rear camera 1B”) to a display interface; (¶33,37, and fig. 4, “control unit 3” which controls the display unit 2 to display obtained combined image as depicted in Fig. 4, which includes the “rear direct image 110 picked up with the rear camera 1B”) providing the surround view image (¶33,37, and Fig. 4, “bird's-eye view image 100 prepared by the image-processing unit 4”) to the display interface; (¶33,37, and fig. 4, “control unit 3” which controls the display unit 2 to display obtained combined image as depicted in Fig. 4, which includes the “bird's-eye view image 100 prepared by the image-processing unit 4”) displaying the propagated rear view image (¶29,33,36-37, and fig. 4, stored and converted “the direct image 110 of the rear pickup region 11B” is displayed as depicted in fig. 4) on a first portion of a screen; (¶33,37, and fig. 4, “direct image 110 of the rear pickup region 11B” depicted on the right in fig. 4) and displaying the surround view image (¶29,33,36-37, and fig. 4, “bird’s-eye view image 100” prepared by the image-processing unit 4 “is displayed” as depicted in fig. 4) on a second portion of the screen, (¶33,37, and fig. 4, “bird’s-eye view image 100” depicted on the left in fig. 4) in which the second portion does not overlap the first portion. (¶33,36-37, and fig. 4, “bird's-eye view image 100 and the direct images 110 are simultaneously displayed” as depicted in fig. 4)

Chinomi teaches the pickup of multiple direct images that are acquired by an image-processing unit which converts direct images and composes the bird’s-eye view images for simultaneous display. The prior art also depicts the simultaneous display of a rear pickup region and a composed bird’s-eye view image.

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the bird’s-eye view video generation of Sakuragi with the image signal processing of Yamashita with the rear view mode of Inui with the image display technique of Chinomi, which processes both conversion of a direct image and composition of a bird’s-eye view image for simultaneous display. This allows the relative positions of the two different perspectives to be checked on the same screen simultaneously.

Claims 16 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sakuragi; Tomoki (US 20190281229 A1) in view of Yamashita; Yutaro et al. (US 20220084257 A1) in view of Chinomi; Satoshi et al.
(US 20060274147 A1) in view of INUI; Yoji et al. (US 20180253106 A1).

Regarding claim 16, Sakuragi teaches, A system (Title, ¶44-45, and Fig. 1, “bird’s-eye view video generation device” such as “bird's-eye view video generation device 40” including a controller 41 depicted in fig. 1) comprising: a camera interface, (¶45 and Fig. 1, “controller 41 includes the video data acquisition unit 42”) a video processing circuit coupled to the camera interface, (¶45,60, and Fig. 1, “bird's-eye view video generator 45” included in controller 41 connected to “video data acquisition unit 42” depicted in Fig. 1) a first processor core coupled to the video processing circuit, (¶60 and Fig. 1, “cut-out processor 452” included in bird's-eye view video generator 45 depicted in Fig. 1) a second processor core coupled to the video processing circuit, (¶60 and Fig. 1, “synthesis processor 453” included in bird's-eye view video generator 45 depicted in Fig. 1) and a display interface coupled to the video processing circuit, (¶45 and Fig. 1, “display controller 48” connected to bird's-eye view video generator 45 depicted in Fig. 1) in which: the camera interface (¶45-46 and Fig. 1, controller 41 includes “video data acquisition unit 42” acquires surroundings video data output by “the front camera 11, the rear camera 12, the left-side camera 13 and the right-side camera 14”) is configured to receive a source rear view video signal, (¶46, video data output by “rear camera 12”) a first side view video signal, (¶46, video data output by “left-side camera 13”) a second side view video signal, (¶46, video data output by “right-side camera 14”) and a front view video signal; (¶46, video data output by “front camera 11”)

the first processor core (¶68,60, and Fig. 1, “cut-out processor 452” included in bird's-eye view video generator 45 depicted in Fig. 1) is configured to cause the video processing circuit (¶68,60,45, and Fig. 1, executed commands contained in the programs to “cut-out processor 452” performs cut-out processing of cutting videos in the “bird's-eye view video generator 45” depicted in Fig. 1) to: provide a rear view image processing path; (¶68 and Fig. 1, “cut-out processor 452 performs cut-out processing of cutting videos of predetermined areas” which includes cut-out processor 452 cutting “a backward cut area from the surroundings video data out of the rear camera 12”) propagate one of the source rear view image signal (¶68, cut-out processor 452 output “backward cut area from the surroundings video data out of the rear camera 12”) and the duplicate rear view video signal, as a propagated rear view image signal, (¶68, cut-out processor 452 output “backward cut area from the surroundings video data out of the rear camera 12” to the synthesis processor 453) through the rear view image processing path; (¶68 and Fig. 1, cut-out processor 452, as part of bird’s-eye view video generator 45 depicted in fig. 1, outputs video image data of the videos obtained including “backward cut area from the surroundings video data out of the rear camera 12” to the “synthesis processor 453”)

and the second processor core (¶71,60, and Fig. 1, “synthesis processor 453” included in bird's-eye view video generator 45 depicted in Fig. 1) is configured to cause the video processing circuit (¶71,60,45, and Fig. 1, executed commands contained in programs to “synthesis processor 453” generates the bird's-eye view video in the “bird's-eye view video generator 45” depicted in Fig. 1) to: provide the surround view image processing path; (¶71 and Fig. 1, synthesis processor 453 “generates the bird's-eye view video 100” by “synthesizing the videos that are cut out by the cut-out processor 452”) receive the transferred rear view image signal (¶68 and 71, “backward cut area from the surroundings video data out of the rear camera 12” cut by cut-out processor 452 output to the “synthesis processor 453”) from the rear view image processing path; (¶68 and 71, “cut-out processor 452 cuts a backward cut area from the surroundings video data out of the rear camera 12” output to the “synthesis processor 453”) and provide a surround view image (¶71 and 68, “synthesis processor 453 generates the bird's-eye view video 100” by synthesizing “the videos that are cut out by the cut-out processor 452”) in the surround view image processing path (¶71 and 68, “synthesis processor 453”) based in part on the transferred rear view image; (¶71 and 68, videos that are cut out by the cut-out processor 452 including “backward cut area from the surroundings video data out of the rear camera 12”) and the display interface (¶77, Figs. 1 and 3, “display controller 48” causes the display panel 30 to “display the bird's-eye view video 100”) is configured to provide, for display, the surround view image, (¶77 and Fig. 1, “display controller 48” causes the display panel 30 to “display the bird's-eye view video 100”)

But does not explicitly teach, generate a duplicate rear view video signal based on the source rear view video signal; transfer the other of the source rear view image signal and the duplicate rear view video signal, as a transferred rear view image signal, from the rear view image processing path to a surround view image processing path; receive the transferred rear view image signal from the rear view image processing path; provide the surround view image processing path after the propagated rear view image signal is propagated through the rear view image processing path; the display interface is configured to provide a rear view image, based on the transferred rear view image signal, wherein the rear view image is generated and duplicated before the surround view image processing path is provided.

However, Yamashita teaches additionally, generate a duplicate rear view video signal (¶67 and fig. 1, splitter 21 “supplies an image 1 captured by the camera 10” to two processing units as depicted in fig. 1) based on the source rear view video signal; (¶67,60, and fig. 2, “camera 10” arranged at “a rear of the vehicle body” to capture the rear of the vehicle body 60 depicted in fig. 2) transfer the other of the source rear view image signal and the duplicate rear view video signal, (¶67 and fig. 1, splitter 21 “supplies an image 1 captured by the camera 10” to “CMS signal processing unit 22”) as a transferred rear view image signal, (¶67,60, and fig. 2, image 1 of the “rear of the vehicle body” captured by the camera 10 supplied to “CMS signal processing unit 22”) from the rear view image processing path (¶67,60, and fig. 2, “splitter 21” supplied image 1 captured by “camera 10” arranged at “rear of the vehicle body” to capture the rear of the vehicle body 60 depicted in fig. 2) to a surround view image processing path; (¶67, splitter 21 supplies image 1 to “rear view signal processing unit 23”) receive the transferred rear view image

Prosecution Timeline

Aug 31, 2023
Application Filed
Jan 09, 2025
Non-Final Rejection — §103
Apr 15, 2025
Response Filed
Jul 03, 2025
Final Rejection — §103
Nov 07, 2025
Request for Continued Examination
Nov 13, 2025
Response after Non-Final Action
Nov 14, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604034
METHOD FOR PARTITIONING BLOCK AND DECODING DEVICE
2y 5m to grant · Granted Apr 14, 2026
Patent 12596190
MILLIMETER WAVE DISPLAY ARRANGEMENT
2y 5m to grant · Granted Apr 07, 2026
Patent 12581086
MERGE WITH MVD BASED ON GEOMETRY PARTITION
2y 5m to grant · Granted Mar 17, 2026
Patent 12563112
SPATIALLY UNEQUAL STREAMING
2y 5m to grant · Granted Feb 24, 2026
Patent 12554017
EBS/TOF/RGB CAMERA FOR SMART SURVEILLANCE AND INTRUDER DETECTION
2y 5m to grant · Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
56%
Grant Probability
84%
With Interview (+28.1%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 302 resolved cases by this examiner. Grant probability derived from career allow rate.
