DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on August 23, 2024, is in compliance with the provisions of 37 CFR 1.97 and has been considered by the examiner.
Claim Rejections - 35 U.S.C. § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention; or
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 7-10, and 14-15 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Kondo et al. (US 2020/0189467).
In regard to claim 1, note Kondo discloses a video processing device (host vehicle A, Fig. 1) comprising: a synthesis processing unit that generates a synthetic video signal by rendering a first video (70) included in a designated region in a screen transparent and synthesizing the first video and a second video in a manner in which the first video based on a first video signal is arranged in the screen and the second video (73) based on a second video signal different from the first video signal is arranged in the screen; and a memory (flash memory, [0099]) that stores transparency control data for specifying the designated region and indicating a transparency aspect (set at a specified transmittance) of the first video in the designated region (see Figs. 5-7, [0076]).
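For illustration only, the synthesis operation mapped above resembles conventional alpha compositing confined to a designated region. The following sketch is the examiner's illustrative analogy, not code from Kondo; the function name, region encoding, and transmittance convention are all assumptions:

```python
import numpy as np

def synthesize(first_video, second_video, region, transmittance):
    """Composite second_video over first_video inside `region` only.

    first_video, second_video: HxWx3 uint8 frames of equal size.
    region: (top, left, bottom, right) bounds of the designated region.
    transmittance: 0.0 (first video fully opaque) to 1.0 (fully transparent).
    """
    out = first_video.astype(np.float32).copy()
    t, l, b, r = region
    # Render the first video transparent in the designated region and
    # blend the second video there at the specified transmittance.
    out[t:b, l:r] = ((1.0 - transmittance) * out[t:b, l:r]
                     + transmittance * second_video[t:b, l:r].astype(np.float32))
    return out.astype(np.uint8)
```

Under this analogy, the stored "transparency control data" would correspond to the `region` bounds plus the `transmittance` value applied within them.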
In regard to claim 2, note Kondo discloses wherein the synthesis processing unit arranges the second video (73) to be superimposed on at least a part of the first video (70), and the designated region in the screen is in a region in which the first video and the second video are superimposed (see [0074]-[0076] and Fig. 8).
In regard to claim 3, note Kondo discloses a data change control unit that overwrites the transparency control data stored in the memory with transparency control data for rewrite when the data change control unit receives the transparency control data for rewrite and a data change command (see Fig. 10, S142: the transmittance of the image transmission region 73 gradually decreases as the elapsed time after the detection information of the tracked object V2 disappears increases; see also [0076]).
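The time-dependent transmittance behavior cited above (Kondo, Fig. 10, S142) could be modeled, purely as an illustrative sketch, by a ramp that decreases with elapsed time after detection is lost. The linear shape, the function name, and the 2.0-second fade duration are assumptions for illustration and are not taken from the reference:

```python
def transmittance_after_loss(elapsed_s, initial=1.0, fade_duration_s=2.0):
    """Return the transmittance of the image transmission region.

    Ramps down linearly from `initial` to 0.0 over `fade_duration_s`
    seconds after the tracked object's detection information disappears,
    so the region gradually returns to opaque.
    """
    if elapsed_s >= fade_duration_s:
        return 0.0
    return initial * (1.0 - elapsed_s / fade_duration_s)
```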
In regard to claim 7, note Kondo discloses wherein the memory (flash memory) is a non-volatile memory in which the transparency control data is rewritable ([0099]).
In regard to claim 8, note Kondo discloses a substrate (video image output device 100) provided with the synthesis processing unit and a connector that removably connects the memory (see Fig. 1).
In regard to claim 9, note Kondo discloses a video processing system (host vehicle A, Fig. 1) comprising: a first camera (21) that outputs a first video signal (70) indicating a recorded video; a second camera (radar 23 can be an on-board camera, see [0096]) that outputs a second video signal (73) indicating a recorded video; a display device with a single screen; a synthesis processing unit that generates a synthetic video signal by rendering a first video included in a designated region in the screen transparent and synthesizing the first video and a second video in a manner in which the first video based on the first video signal is arranged in the screen and the second video based on the second video signal different from the first video signal is arranged in the screen; a memory (flash memory, [0099]) that stores transparency control data for specifying the designated region and indicating a transparency aspect of the first video in the designated region (see Figs. 5-7, [0076]); and a data change control unit that overwrites the transparency control data stored in the memory with transparency control data for rewrite when the data change control unit receives the transparency control data for rewrite and a data change command (see Fig. 10, S142: the transmittance of the image transmission region 73 gradually decreases as the elapsed time after the detection information of the tracked object V2 disappears increases; see also [0076]).
Claim 10 has been analyzed and rejected as previously discussed with respect to claims 2 and 9.
Claim 14 has been analyzed and rejected as previously discussed with respect to claims 7 and 9.
Claim 15 has been analyzed and rejected as previously discussed with respect to claims 8 and 9.
Allowable Subject Matter
Claims 4-6 and 11-13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2020/0302657: note an imaging apparatus includes an image sensor and a controller. The image sensor is configured to capture a rear area behind a vehicle and generate a first video image. The controller is configured to synthesize a guide wall image that indicates a predicted path of the vehicle in a display region of the first video image.
US 2025/0292416: note that, as the rectangle displayed in the missing region, a picture or a symbol imitating the tracking target or a cut-out of the video detected in the previous frame can also be arranged. In addition, the rectangle may be superimposed on the video with increased transmittance.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lin Ye, whose telephone number is (571) 272-7372. The examiner can normally be reached M-F, 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the TC director, John Barlow, can be reached at (571) 272-4550. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in
DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LIN YE/Supervisory Patent Examiner, Art Unit 2638