DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Response to Amendment
The Amendment filed on 9/5/2025 has been entered. Claim 2 has been canceled and claim 14 has been added. Claims 1 and 3-14 remain pending in the application. Applicant’s amendments to the abstract and claims have overcome the previous objection and the rejection under 35 U.S.C. 101.
Drawings
The drawings are objected to because the blocks do not include a descriptive label along with the reference number. For example, in Fig. 1, the box for reference number “49” should be labeled “media server,” and the cloud for reference number “48” should be labeled “Internet.”
Appropriate correction is recommended.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claims 1 and 12 are objected to because of the following informalities:
Claims 1 and 12 recite “stop rendering light.” This limitation should be placed after “analyze said video content, determine said one or more light effects based on said analysis of said video content in dependence on whether said display device will display an overlay on top of said video content.” In addition, it is noted that if rendering of the light is stopped, the inserted content would appear to “replace” the video content rather than be presented as an “overlay.”
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 7-8 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ishibashi U.S. Patent Application 20180211635 in view of Kagawa U.S. Patent Application 20170064178, and further in view of Dharmaji U.S. Patent Application 20110004892.
Regarding claim 1, Ishibashi discloses a system for determining one or more light effects based on an analysis of video content and further arranged for controlling one or more lighting devices to render said one or more light effects (paragraph [0053]: At Step S20, the controller 13 acquires recognition information from the detection unit 5. The recognition information acquired at Step S20 is at least one of information on a target on which the virtual image S is to be superimposed and displayed and information on a region; paragraph [0006]: the projection unit adjusts a luminance distribution of the virtual image in accordance with a luminance distribution of a region of the superimposing display target in the front image), said system comprising:
at least one input interface (imaging unit 2) arranged for receiving a video signal, comprising said video content, and further arranged for receiving a further signal being indicative of one or more commands transmitted by a further device (controller 13) to a display device (paragraph [0038]: The imaging unit 2 takes an image of a region in front of the vehicle 100 to generate a front image 20 (see FIG. 3) that is an image of the front; paragraph [0043]: The controller 13 is a superimposing rendering device, and is a device configured to generate instructions for images and videos to be displayed on the display device 11... the controller 13 is a computer including an arithmetic unit, a storage unit, and an interface unit);
at least one output interface (controller 13) arranged for outputting said video signal to a display device and further arranged for controlling said one or more lighting devices (paragraph [0037]: The display device 11 displays the virtual image S so as to be superimposed on a scene in the front visual field on the basis of information from a controller 13 described later… The display device 11 includes a liquid crystal display unit and a backlight; paragraph [0043]: the controller 13 is a computer including an arithmetic unit, a storage unit, and an interface unit);
and at least one processor (controller 13) configured to:
receive said video signal via said at least one input interface, output, via said at least one output interface, said video signal to said display device for displaying said video content (paragraph [0043]: the controller 13 is a computer including an arithmetic unit, a storage unit, and an interface unit; paragraph [0038]: The imaging unit 2 takes an image of a region in front of the vehicle 100 to generate a front image 20 (see FIG. 3) that is an image of the front),
receive said further signal via said at least one input interface, determine, based on said further signal, whether said display device will display an overlay on top of said video content (paragraph [0043]: The controller 13 is a superimposing rendering device, and is a device configured to generate instructions for images and videos (further signal) to be displayed on the display device 11... the controller 13 is a computer including an arithmetic unit, a storage unit, and an interface unit; paragraph [0053]: At Step S20, the controller 13 acquires recognition information from the detection unit 5. The recognition information acquired at Step S20 is at least one of information on a target on which the virtual image S is to be superimposed and displayed and information on a region in which the virtual image S is to be superimposed and displayed),
analyze said video content, determine said one or more light effects based on said analysis of said video content in dependence on whether said display device will display an overlay on top of said video content (paragraph [0041]: A detection unit 5 detects a superimposing display target in front of the vehicle 100. The superimposing display target is a target on which the virtual image S is to be superimposed. The projection unit 7 determines a display position of the virtual image S such that the virtual image S is superimposed on the superimposing display target when viewed by the driver D; paragraph [0058]: At Step S60, the controller 13 acquires and smooths the brightness (grayscale values) for each divided block), and
control, via said at least one output interface, said one or more lighting devices to render said determined one or more light effects (paragraph [0063]: At Step S100, the controller 13 combines the front image 20 obtained by smoothing the brightness for each block with a superimposing display. The controller 13 sets the luminance of each portion in the virtual image S to be displayed on the basis of the representative luminance value BR calculated at Step S60... The divided region 61 is displayed lighter as the representative luminance value BR becomes higher, and the divided region 61 is displayed darker as the representative luminance value BR becomes lower; paragraph [0094]: In the case of filling the superimposing display target, the superimposing display target may be rendered such that the superimposing display target can be viewed behind the virtual image S. A plurality of virtual images S may be displayed in a region superimposed on a superimposing display target).
Ishibashi discloses all the features with respect to claim 1 as outlined above. However, Ishibashi fails to explicitly disclose determining light effects based on whether said display device will display an overlay, and fails to disclose, upon determining that said display device will display an overlay on top of said video content, controlling said one or more lighting devices to stop rendering light.
Kagawa discloses light effects based on whether said display device will display an overlay (paragraph [0072]: The image processing unit 45 calculates the illumination time pixel signal by using the formula (1) or the formula (2) according to whether a pixel is located in an overlap line or a non-overlap line for each pixel, and outputs the calculated illumination time pixel signal as an image signal for each frame).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi’s system to display brightness based on the overlapping condition for each pixel, as taught by Kagawa, in order to achieve appropriate luminance for the image frame.
Ishibashi as modified by Kagawa discloses all the features with respect to claim 1 as outlined above. However, Ishibashi as modified by Kagawa fails to disclose, upon determining that said display device will display an overlay on top of said video content, controlling said one or more lighting devices to stop rendering light.
Dharmaji discloses upon determining that said display device will likely display an overlay on top of said video content, control said one or more lighting devices to stop rendering light (paragraph [0023]: a command is provided to stop rendering the multimedia content with a further command (switch/inlay/overlay) to insert the alternate content). Dharmaji’s teaching of stopping the rendering of multimedia content and overlaying alternate content can be combined with the device of Ishibashi and Kagawa, such that, upon determining to display an overlay on the video content, a command is sent out as a trigger event to stop rendering pixel light for the multimedia content and to overlay the alternate content.
Therefore, it would have been obvious before the effective filing date of the claimed invention to further modify Ishibashi as modified by Kagawa to stop rendering based on an overlay command, as taught by Dharmaji, in order to display alternate content.
Regarding claim 3, Ishibashi as modified by Kagawa and Dharmaji discloses the system as claimed in claim 1, wherein said at least one processor is configured to, upon determining that said display device will display an overlay on top of said video content, determine said one or more light effects such that a noticeability of said one or more light effects is gradually reduced (Ishibashi’s paragraph [0073]: When a portion with low luminance is present in the target region 60, the projection unit 7 in the present embodiment decreases luminance of the portion 41a in the virtual image S that corresponds to the portion with low luminance. When the luminance distribution of the virtual image S is adjusted in this manner, the driver D can more easily view the dark background).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi’s system to display brightness based on the overlapping condition for each pixel, as taught by Kagawa, in order to achieve appropriate luminance for the image frame, and to further modify the combination to stop rendering based on an overlay command, as taught by Dharmaji, in order to display alternate content.
Regarding claim 4, Ishibashi as modified by Kagawa and Dharmaji discloses the system as claimed in claim 1, wherein said at least one processor is configured to, upon determining that said display device will display an overlay on top of said video content, determine one or more default and/or user-defined light effects, and control, via said at least one output interface, said one or more lighting devices to render said one or more default and/or user-defined light effects (Ishibashi’s paragraph [0066]: the corrected luminance value BT may be any value of predetermined (default) grayscale values in stages; paragraph [0041]: A detection unit 5 detects a superimposing display target in front of the vehicle 100. The superimposing display target is a target on which the virtual image S is to be superimposed. The projection unit 7 determines a display position of the virtual image S such that the virtual image S is superimposed on the superimposing display target when viewed by the driver D).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi’s system to display brightness based on the overlapping condition for each pixel, as taught by Kagawa, in order to achieve appropriate luminance for the image frame, and to further modify the combination to stop rendering based on an overlay command, as taught by Dharmaji, in order to display alternate content.
Regarding claim 5, Ishibashi as modified by Kagawa and Dharmaji discloses the system as claimed in claim 1, wherein said at least one processor is configured to, upon determining that said display device will display an overlay on top of said video content, determine one or more further light effects associated with said overlay, and control, via said at least one output interface, said one or more lighting devices to render said one or more further light effects (Ishibashi’s paragraph [0063]: At Step S100, the controller 13 combines the front image 20 obtained by smoothing the brightness for each block with a superimposing display. The controller 13 sets the luminance of each portion in the virtual image S to be displayed on the basis of the representative luminance value BR calculated at Step S60... The divided region 61 is displayed lighter as the representative luminance value BR becomes higher, and the divided region 61 is displayed darker as the representative luminance value BR becomes lower (further light effects)).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi’s system to display brightness based on the overlapping condition for each pixel, as taught by Kagawa, in order to achieve appropriate luminance for the image frame, and to further modify the combination to stop rendering based on an overlay command, as taught by Dharmaji, in order to display alternate content.
Regarding claim 7, Ishibashi as modified by Kagawa and Dharmaji discloses the system as claimed in claim 1, wherein said at least one processor is configured to determine said one or more light effects based on said analysis of said video content further in dependence on a size and/or shape of said overlay (Ishibashi’s paragraph [0085]: The shape of the target region 80 is determined so as to include a display region of the arrow 50. For example, the target region 80 is determined on the basis of map information and information on the current position of the vehicle 100 acquired from the navigation device 4; paragraph [0067]: At Step S110, the controller 13 outputs a superimposing display image. The controller 13 controls the display device 11 to display the frame 40A illustrated in FIG. 9 in which the luminance has been corrected. The display device 11 generates an image of the frame 40A in which the luminance has been corrected).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi’s system to display brightness based on the overlapping condition for each pixel, as taught by Kagawa, in order to achieve appropriate luminance for the image frame, and to further modify the combination to stop rendering based on an overlay command, as taught by Dharmaji, in order to display alternate content.
Regarding claim 8, Ishibashi as modified by Kagawa and Dharmaji discloses the system as claimed in claim 1, wherein said at least one processor is configured to determine whether said display device will display an overlay on top of said video content further based on data exchanged between said display device and one or more other devices (Ishibashi’s paragraph [0091]: at Step S80, acquires the brightness in a front field of view. At Step S90, the controller 13 recognizes the ambient brightness; paragraph [0092]: At Step S100, the controller 13 combines the image 20 obtained by smoothing the brightness for each block with a superimposing display. The controller 13 adjusts the luminance value BT of each portion in the arrow 50 on the basis of the representative luminance value BR calculated at Step S60. The controller 13 corrects the luminance level in the entire arrow 50 in accordance with the result of recognizing the ambient brightness; paragraph [0053]: At Step S20, the controller 13 acquires recognition information from the detection unit 5. The recognition information acquired at Step S20 is at least one of information on a target on which the virtual image S is to be superimposed and displayed and information on a region in which the virtual image S is to be superimposed and displayed).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi’s system to display brightness based on the overlapping condition for each pixel, as taught by Kagawa, in order to achieve appropriate luminance for the image frame, and to further modify the combination to stop rendering based on an overlay command, as taught by Dharmaji, in order to display alternate content.
Regarding claim 11, Ishibashi as modified by Kagawa and Dharmaji discloses the system as claimed in claim 1, wherein said at least one processor is configured to, upon determining that said display device will display an overlay on top of said video content, determine one or more further light effects based on said analysis of said video content and based on content of said overlay and control, via said at least one output interface, said one or more lighting devices to render said one or more further light effects (Ishibashi’s paragraph [0063]: At Step S100, the controller 13 combines the front image 20 obtained by smoothing the brightness for each block with a superimposing display. The controller 13 sets the luminance of each portion in the virtual image S to be displayed on the basis of the representative luminance value BR calculated at Step S60... The divided region 61 is displayed lighter as the representative luminance value BR becomes higher, and the divided region 61 is displayed darker as the representative luminance value BR becomes lower (further light effects)).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi’s system to display brightness based on the overlapping condition for each pixel, as taught by Kagawa, in order to achieve appropriate luminance for the image frame, and to further modify the combination to stop rendering based on an overlay command, as taught by Dharmaji, in order to display alternate content.
Claim 12 recites the functions of the apparatus recited in claim 1 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 1 applies to the method steps of claim 12.
Claim 13 recites the functions of the apparatus recited in claim 1 as computer program steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 1 applies to the computer program steps of claim 13.
Regarding claim 14, Ishibashi as modified by Kagawa and Dharmaji discloses a lighting fixture comprising: the system of claim 1; and
the one or more lighting devices (Ishibashi’s paragraph [0054]: a position at which light of each pixel of the display device 11 is superimposed when viewed by the driver D).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi’s system to display brightness based on the overlapping condition for each pixel, as taught by Kagawa, in order to achieve appropriate luminance for the image frame, and to further modify the combination to stop rendering based on an overlay command, as taught by Dharmaji, in order to display alternate content.
Claims 6 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Ishibashi U.S. Patent Application 20180211635 in view of Kagawa U.S. Patent Application 20170064178, in view of Dharmaji U.S. Patent Application 20110004892, and further in view of Jordan U.S. Patent Application 20030078784.
Regarding claim 6, Ishibashi as modified by Kagawa and Dharmaji discloses determining said one or more light effects based on said analysis of said video content and control said one or more lighting devices to render said one or more light effects (Ishibashi’s paragraph [0043]: The controller 13 is a superimposing rendering device, and is a device configured to generate instructions for images and videos to be displayed on the display device 11; paragraph [0041]: A detection unit 5 detects a superimposing display target in front of the vehicle 100. The superimposing display target is a target on which the virtual image S is to be superimposed. The projection unit 7 determines a display position of the virtual image S such that the virtual image S is superimposed on the superimposing display target when viewed by the driver D; paragraph [0058]: At Step S60, the controller 13 acquires and smooths the brightness (grayscale values) for each divided block). However, Ishibashi as modified by Kagawa and Dharmaji fails to disclose determining that said one or more commands instruct said display device to exit a menu.
Jordan discloses determining that said one or more commands instruct said display device to exit a menu (paragraph [0109]: virtual buttons--one for main menu and the other one for exit to make the overlay disappear).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi as modified by Kagawa and Dharmaji to use an exit menu command, as taught by Jordan, in order to provide a convenient user interface.
Regarding claim 9, Ishibashi as modified by Kagawa, Dharmaji and Jordan discloses the system as claimed in claim 1, wherein said at least one processor is configured to determine whether said display device will display an overlay on top of said video content based on content of said one or more commands (Jordan’s paragraph [0102]: the help overlay always shows the commands for accessing the main menu overlay and "more help" from the user center. Also, the help overlay explains the speakable text indicator, if it is activated. Note that the help overlay helps the cable subscriber use and spoken commands; paragraph [0106] "Main Menu" command to display main menu overlay; Ishibashi’s paragraph [0043]: The controller 13 is a superimposing rendering device, and is a device configured to generate instructions for images and videos to be displayed on the display device 11).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi as modified by Kagawa and Dharmaji to use an exit menu command, as taught by Jordan, in order to provide a convenient user interface.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Ishibashi U.S. Patent Application 20180211635 in view of Kagawa U.S. Patent Application 20170064178, in view of Dharmaji U.S. Patent Application 20110004892, and further in view of Unger U.S. Patent Application 20070283394.
Regarding claim 10, Ishibashi as modified by Kagawa and Dharmaji discloses all the features with respect to claim 1 as outlined above. However, Ishibashi as modified by Kagawa and Dharmaji fails to disclose displaying an overlay based on a quantity and/or type and/or duration of said one or more commands.
Unger discloses displaying an overlay based on a quantity and/or type and/or duration of said one or more commands (paragraph [0050]: Four command types are used to build scripts--PLAY, SHOW, OVERLAY, and SUBSCRIPT; see table after paragraph [0050], display overlay based on type of command).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Ishibashi as modified by Kagawa and Dharmaji to display an overlay based on the command, as taught by Unger, in order to facilitate the presentation of content on a video display.
Response to Arguments
Applicant’s arguments filed 9/5/2025, pages 8-9, with respect to the rejection(s) of claims 1 and 12 under 35 U.S.C. 103, have been fully considered but are moot in view of the new ground(s) of rejection made under 35 U.S.C. 103 as being unpatentable over Ishibashi U.S. Patent Application 20180211635 in view of Kagawa U.S. Patent Application 20170064178, and further in view of Dharmaji U.S. Patent Application 20110004892, as outlined above.
Applicant argues on page 8 that the modified system would control one or more lighting devices to render alternative content for a set period of time before returning to the original content, and that there is nothing in Dharmaji, Kagawa, or Ishibashi that suggests using a determination that a display device will display an overlay on top of video content as a triggering event, either for stopping the rendering of light or for rendering alternative content.
In reply, the rejection is based on the combination of Ishibashi, Kagawa and Dharmaji.
Ishibashi discloses determining whether said display device will display an overlay on top of said video content (paragraph [0043]: The controller 13 is a superimposing rendering device, and is a device configured to generate instructions for images and videos (further signal) to be displayed on the display device 11... the controller 13 is a computer including an arithmetic unit, a storage unit, and an interface unit; paragraph [0053]: At Step S20, the controller 13 acquires recognition information from the detection unit 5. The recognition information acquired at Step S20 is at least one of information on a target on which the virtual image S is to be superimposed and displayed and information on a region in which the virtual image S is to be superimposed and displayed).
Dharmaji discloses upon determining that said display device will likely display an overlay on top of said video content, control said one or more lighting devices to stop rendering light (paragraph [0023]: a command is provided to stop rendering the multimedia content with a further command (switch/inlay/overlay) to insert the alternate content). Dharmaji’s teaching of stopping the rendering of multimedia content and overlaying alternate content can be combined with the device of Ishibashi and Kagawa, such that, upon determining to display an overlay on the video content, a command is sent out as a trigger event to stop rendering pixel light for the multimedia content and to overlay the alternate content.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang whose telephone number is (571)272-9589. The examiner can normally be reached on Monday-Friday 9:00 AM-6:00 PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached on 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/YI YANG/
Primary Examiner, Art Unit 2616