DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-3, 6, 7, 11, 13, and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 3 and 13 (including the limitations of parent claims 1 and 4, respectively) of U.S. Patent No. 12,106,429. Although the claims at issue are not identical, they are not patentably distinct from each other because the patented claims recite each limitation (or a trivial variation) of the current claims.
The following table illustrates a mapping of the conflicting claims:
Current Application          U.S. Patent 12,106,429
1-3, 6, 7, 11, 13            3 (incl. parent claim 1)
15                           13 (incl. parent claim 4)
The following table illustrates a sample mapping of the limitations of claim 1 of the current application when compared against the pertinent limitations of claim 3 (including the limitations of parent claim 1) of the patent. The remaining claims can be mapped in a similar manner.
Current Application: A computer-implemented light emission control method comprising:
U.S. Patent 12,106,429: (claim 1) A computer-implemented display control method comprising:

Current Application: acquiring a sound signal indicating sound collected in a target space, wherein a lighting signal is embedded in the sound signal,
U.S. Patent 12,106,429: (claim 1) acquiring distribution information … (claim 3) the distribution information comprises a sound signal indicating sound collected in the target space, and wherein the lighting signal is embedded in the sound signal.

Current Application: the lighting signal being a signal for controlling a first lighting device provided in the target space;
U.S. Patent 12,106,429: (claim 1) a lighting signal which is a signal for controlling a physical lighting device provided in a target space

Current Application: and controlling light to be emitted by a second lighting device based on the lighting signal embedded in the sound signal, the second lighting device being provided in a remote location that is remote from the target space.
U.S. Patent 12,106,429: (claim 1) and controlling light to be emitted by a virtual lighting device … based on the state information, wherein the virtual lighting device is arranged in the virtual space that is remote from the target space (NOTE: the “state information” is based on the lighting signal, and therefore this limitation teaches controlling the light based on the lighting signal, and claim 3 teaches that the lighting signal is embedded in the sound signal)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-8, 10-12, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Islam et al. (US 2004/0032536; hereinafter “Islam”) in view of Iwase et al. (US 2011/0023691; hereinafter “Iwase”).
Regarding claim 1, Islam teaches A computer-implemented light emission control method (“controls lighting,” para. 7) comprising: acquiring a sound signal indicating sound collected in a target space (“high definition audio signal recorded at the live stage,” para. 46), the lighting signal being a signal for controlling a first lighting device provided in the target space (“a lighting board at the live stage controls lighting trusses at the live stage,” para. 7); and controlling light to be emitted by a second lighting device based on the lighting signal … the second lighting device being provided in a remote location that is remote from the target space (“a lighting board at the live stage controls lighting trusses at the live stage as well as the virtual stages, in order to duplicate the lighting effects of the live performance at the virtual stages,” para. 7).
Islam does not disclose wherein a lighting signal is embedded in the sound signal.
In the same art of transmitting live performance data, Iwase teaches wherein a lighting signal is embedded in the sound signal (“The control device (musical performance-related information output device) receives a manipulation input for controlling an external apparatus (for example, … a stage-related device, such as an illumination or a camera, or the like). The control device generates a control signal, which controls the external apparatus, in accordance with the manipulation input. Then, the control device superimposes the control signal on the audio signal,” para. 199).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Iwase to Islam. The motivation would have been “The control device can easily control an external apparatus connected thereto only by outputting the audio signal on which the control signal is superimposed” (Iwase, para. 200).
Regarding claim 2, the combination of Islam and Iwase renders obvious acquiring state information indicating a state of the target space; and controlling the light to be emitted by the second lighting device based on the state information (“a lighting board at the live stage controls lighting trusses at the live stage as well as the virtual stages, in order to duplicate the lighting effects of the live performance at the virtual stages,” Islam, para. 7).
Regarding claim 3, the combination of Islam and Iwase renders obvious acquiring image information indicating a target image in which the target space is displayed; and displaying a virtual space on a display device provided in the remote location based on the image information, the virtual space in which the target image is displayed (“a high definition camera and a high definition audio recording system at the live stage. Signals from the camera and audio recording system may be reproduced on high definition projectors and surround sound audio systems which are specifically configured at the virtual stages,” Islam, para. 7).
Regarding claim 6, the combination of Islam and Iwase renders obvious wherein the state information comprises information indicating a state of sound or light in the target space (“the lighting effects of the live performance,” Islam, para. 7).
Regarding claim 7, the combination of Islam and Iwase renders obvious wherein the state information is generated based on the lighting signal for controlling the first lighting device provided in the target space (“a lighting board at the live stage controls lighting trusses at the live stage as well as the virtual stages, in order to duplicate the lighting effects of the live performance at the virtual stages,” Islam, para. 7).
Regarding claim 8, the combination of Islam and Iwase renders obvious wherein the state information further comprises information for indicating a representative color representing a color of a target image in which the target space is displayed (“a high definition camera … Signals from the camera and audio recording system may be reproduced on high definition projectors,” Islam, para. 7; an image from a high definition camera would include color data).
Regarding claim 10, the combination of Islam and Iwase renders obvious controlling sound to be output from a speaker device provided in the remote location based on the sound signal (“Signals from the camera and audio recording system may be reproduced on high definition projectors and surround sound audio systems which are specifically configured at the virtual stages,” Islam, para. 7).
Regarding claim 11, the combination of Islam and Iwase renders obvious wherein the first lighting device is a physical lighting device (“a lighting board at the live stage controls lighting trusses at the live stage,” Islam, para. 7).
Regarding claim 12, the combination of Islam and Iwase renders obvious wherein the second lighting device is a physical lighting device (“the virtual stage 102 [of Fig. 3] … include a trough 308 between the stage 102 and the screen 304 to minimize any spill light from a downstage lighting truss 310 onto the screen 304,” Islam, para. 33; this description shows that the lighting truss 310 of Fig. 3 at the virtual stage is a physical lighting device because it would not be necessary to “minimize any spill light” if the light was a virtual light).
Regarding claim 14, the combination of Islam and Iwase renders obvious wherein the lighting signal is separate from the target image (“generate a high definition audio signal,” Islam, para. 28; “project a high definition [image] signal with sufficient resolution to create an illusion of a live performance at a distance of 10 feet or greater from the projection screen,” Islam, para. 35; these two recitations from Islam show that the audio signal is separate from the image signal, and therefore in the combination of Islam and Iwase, where the lighting signal is embedded in the audio signal, the lighting signal would also be separate from the image signal).
Regarding claim 15, it is rejected using the same citations and rationales described in the rejection of claim 1, with the additional limitations of A control system comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the control system to perform operations (“lighting boards at the live and virtual stages are controlled by identical commands,” Islam, para. 48; processor/memory are implicit in the electronic control devices of Islam and Iwase).
Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Islam and Iwase, and further in view of Varshney et al. (US 2020/0118342; hereinafter “Varshney”).
Regarding claim 4, the combination of Islam and Iwase does not disclose acquiring position information indicating a viewing position in the virtual space; controlling the light to be emitted by the second lighting device based on the position information.
In the same art of transmitting live performance data, Varshney teaches acquiring position information indicating a viewing position in the virtual space (“the users' current viewpoint,” para. 124); controlling the light to be emitted by the second lighting device based on the position information (“use the 360 image/video at the users' current viewpoint as a lighting source when lighting virtual objects,” para. 124).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Varshney to the combination of Islam and Iwase. The motivation would have been “to create a photorealistic real environment which is dynamic and also navigable by a user” (Varshney, para. 3).
Regarding claim 5, the combination of Islam and Iwase does not disclose acquiring position information indicating a viewing position in the virtual space; displaying the virtual space on the display device based on the position information.
In the same art of transmitting live performance data, Varshney teaches acquiring position information indicating a viewing position in the virtual space (“the users' current viewpoint,” para. 124); displaying the virtual space on the display device based on the position information (“During rendering, the position of the virtual camera which may be used to render virtual objects is set to the estimated 360 camera position of current 360 viewpoint of the user,” para. 123).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Varshney to the combination of Islam and Iwase. The motivation would have been “to create a photorealistic real environment which is dynamic and also navigable by a user” (Varshney, para. 3).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Islam and Iwase, and further in view of Charlton et al. (US 2021/0074069; hereinafter “Charlton”).
Regarding claim 9, the combination of Islam and Iwase does not disclose wherein the state information further comprises information for indicating a color corresponding to a frequency of the sound collected in the target space.
In the same art of telecommunication and mixed reality, Charlton teaches wherein the state information further comprises information for indicating a color corresponding to a frequency of the sound collected in the target space (“analyzes the audio data to determine changes in at least one of the one or more audio characteristics of the audio data. For example, the analysis component may identify a change in volume detected in the audio data such as a transition within a song during a concert ... changes in frequency,” para. 55; “a change in volume or frequency of the audio data may cause the components of the augmented reality system to modify the first graphical element (e.g., the aurora) to change opacity value, color value,” para. 66).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Charlton to the combination of Islam and Iwase. The motivation would have been “to improve video communications between devices” (Charlton, para. 4).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Islam and Iwase, and further in view of Mcnelley et al. (US 2019/0227419; hereinafter “Mcnelley”).
Regarding claim 13, the combination of Islam and Iwase does not disclose wherein the second lighting device is a virtual lighting device provided in the virtual space.
In the same art of simulating a live stage performance, Mcnelley teaches wherein the second lighting device is a virtual lighting device provided in the virtual space (“Virtual stage lighting is disclosed for simulating stage lights and coordination of virtual stage lights with real stage lights for live and recorded performances,” abstract).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Mcnelley to the combination of Islam and Iwase. The motivation would have been “to appear ultra-realistic” (Mcnelley, para. 399).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan McCulley whose telephone number is (571)270-3754. The examiner can normally be reached Monday through Friday, 8:00am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN MCCULLEY/Primary Examiner, Art Unit 2611