DETAILED ACTION
Acknowledgment is made of the preliminary amendment submitted on 07/29/2024. By virtue of this amendment:
Claim 2 is canceled;
Claim 15 is newly added;
Claims 1 and 3-13 are currently amended; thus,
Claims 1 and 3-15 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of this application’s status as a 371 of PCT/EP2023/051927, filed on 01/26/2023, which claims foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of EP22154199.8, filed on 01/31/2022, has been received/retrieved by the Office.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/29/2024 has been considered by the examiner.
Claim Objections
Claims 3-12 are objected to because of the following informalities:
Regarding claims 3-12, the preamble of each claim should start with a definite article, thus reading --The system as claimed in…--
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter: claim 14 recites “a computer program product”. A claim drawn to such a computer program product that covers both transitory and non-transitory embodiments (see paragraphs 27 and 33 of applicant’s published specification) may be amended to narrow the claim to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. 101, by adding the limitation “non-transitory” to the claim(s) (e.g., a non-transitory computer-readable medium).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 15, the claim recites “a plurality of lighting devices”, which renders the claim indefinite because it presents improper antecedent basis: it is unclear whether this refers to the same plurality of lighting devices recited on line 1 of claim 1 or to a different plurality of lighting devices.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-6 and 8-13 are rejected under 35 U.S.C. 103 as being unpatentable over US2020/0225751A1, hereinafter “Kim”, in view of US2019/0230772A1, hereinafter “McKinney”.
Regarding claims 1 and 13, Kim discloses a system and a method (¶34L1: wireless control system) for controlling a plurality of lighting devices (¶35L1: the haptic devices) to render light effects (¶34L12-14: the haptic devices provide a lighting effect) while an audio rendering system renders audio content (¶35L1-4: haptic pattern data corresponding to an audio signal), said system and method comprising:
at least one transmitter (¶43L9: a wireless data transmitter); and
at least one processor (¶91L4: a processor) configured to:
select a first subset and a second subset from said plurality of lighting devices (¶39L1-4: the haptic devices may be classified into user groups A, B and C) based on a type of each of said plurality of lighting devices (¶39L4-5: user group A provides red light; user group B provides blue light), said second subset being different from said first subset,
obtain one or more first audio characteristics of said audio content (¶44L1-15: haptic data generator extracting an audio bit pattern from an audio signal of content data),
obtain, based on one or more types of said lighting devices in said second subset, one or more second audio characteristics of said audio content (¶44L1-15: haptic data generator extracting an audio bit pattern from an audio signal of content data), said one or more second audio characteristics being different from said one or more first audio characteristics (¶46L1-13: haptic pattern data corresponding to a bit pattern of a base and a drum),
determine a first set of light effect parameter values based on said one or more first audio characteristics (¶46L5-8: the haptic pattern data generator may generate haptic pattern data corresponding to a bit pattern of a base based on concert data on a concert performance),
determine a second set of light effect parameter values based on said one or more second audio characteristics (¶46L5-8: the haptic pattern data generator may generate haptic pattern data corresponding to a bit pattern of a drum based on concert data on a concert performance),
determine first light effects with said first set of light effect parameter values (¶48L1-8: the wireless data generator generates wireless data including a haptic pattern to be transmitted),
determine second light effects with said first set of light effect parameter values and said second set of light effect parameter values (¶48L1-8: the wireless data generator generates wireless data including a haptic pattern to be transmitted),
control, via said at least one transmitter, said first subset of lighting devices to render said first light effects while said audio rendering system renders said audio content (¶49L1-7: the wireless data generator may generate wireless data in real time based on a content bit pattern included in content data received in real time), and
control, via said at least one transmitter, said second subset of lighting devices to render said second light effects while said audio rendering system renders said audio content. (¶49L1-7: the wireless data generator may generate wireless data in real time based on a content bit pattern included in content data received in real time)
Kim does not explicitly disclose that the lighting effects are generated based on the audio characteristics; rather, they are generated based on stage lighting data (¶54).
McKinney discloses control of light in response to an audio signal wherein
the lighting effects are generated based on the audio characteristics. (¶9-10: audio signal such as frequency, amplitude and time domain analysis are extracted information and used to control light source group with respect to brightness, saturation, hue and their spatial distribution temporal dynamics)
It would have been obvious to one ordinarily skilled in the art, prior to the effective filing date of the application, to modify the lighting effects disclosed by Kim to be generated based on the sound-activated program disclosed by McKinney.
One of ordinary skill in the art would have been motivated because it becomes possible to control light sources to provide a light effect time-aligned with presentation of the audio signal such that there is a perceived agreement between the light effect and the content of the audio signal, e.g., a piece of music. In particular, changes in the music are properly reflected or underlined by the light, and the method is well suited for automatic control of light for a large variety of audio signals, e.g., different genres of music, while maintaining a good correspondence between music and light. (McKinney ¶11)
Regarding claim 3, Kim in view of McKinney hereinafter “Kim/McKinney” discloses in Kim a system as claimed in claim 1, wherein
said at least one processor is configured to:
determine events in said audio content based on said one or more first audio characteristics, said one or more second audio characteristics, and/or one or more further audio characteristics of said audio content, said events corresponding to moments in said audio content when said audio characteristics meet predefined requirements, and determine said first light effects and said second light effects for said events. (¶58L1-14: the haptic pattern data generator may select a signal of at least one frequency band including desired bit pattern information from the audio signal of the plurality of frequency bands)
Regarding claim 4, Kim/McKinney discloses in McKinney a system as claimed in claim 3, wherein said at least one processor is configured to:
obtain location information indicative of locations of said plurality of lighting devices, determine one or more audio source positions associated with an event of said events, select one or more first lighting devices from said first subset based on said one or more audio source positions and said locations of said lighting devices of said first subset, select one or more second lighting devices from said second subset based on said one or more audio source positions and said locations of said lighting devices of said second subset, control said one or more first lighting devices to render a first light effect of said first light effects and said one or more second lighting devices to render a second light effect of said second light effects, said first light effect and said second light effect being determined for said event. (¶49: With the five light sources L1-L5 spatially scattered in relation to loudspeakers S1-S5, it is possible to control the light sources L1-L5 such that the spatial ambient lighting experience corresponds to the spatial auditory experience when the light sources L1-L5 are controlled along with playback of the audio signal A via loudspeakers S1-S5. When the guitar player walks across the scene in a concert recording the guitar changes position in the auditory image, and a green light can then spatially change accordingly, e.g. from L1 via L2 to L3 along with the movement of the guitar player. This will enhance the total experience for listener/viewer 110.)
Regarding claim 5, Kim/McKinney discloses in Kim a system as claimed in claim 1, wherein
said at least one processor is configured to obtain said one or more first audio characteristics and said one or more second audio characteristics by receiving metadata describing at least some of said one or more first audio characteristics and said one or more second audio characteristics and/or to receive said audio content and analyze said audio content to determine at least some of said one or more first audio characteristics and said one or more second audio characteristics. (¶50L1-0: the wireless data generator generates wireless data by selecting at least one of stored haptic pattern data corresponding to at least one received content data or event effect data)
Regarding claim 6, Kim/McKinney discloses in McKinney the system as claimed in claim 1, wherein said one or more first audio characteristics comprise at least one of loudness and energy and/or said first set of light effect parameter values comprises brightness values. (¶9-10: audio signal such as frequency, amplitude and time domain analysis are extracted information and used to control light source group with respect to brightness, saturation, hue and their spatial distribution temporal dynamics)
Regarding claim 8, Kim/McKinney discloses in McKinney the system as claimed in claim 1, wherein said one or more first audio characteristics comprises a duration of a beat in said audio content and said at least one processor is configured to determine, based on said duration of said beat, a duration of a light effect to be rendered during said beat, said duration of said light effect being one of said first set of light effect parameter values. (¶15-16: the determining of the sections of the audio signal includes extracting statistical data relating to at least one property selected from the group consisting of: section boundary, tempo, dynamic variation, musical key. The light control parameter may reflect a characteristic property of the audio signal (A) in each of the sections; the light control parameter may include information to control at least one light property selected from the group consisting of: brightness, saturation, hue, and their spatial distribution and temporal dynamics.)
Regarding claim 9, Kim/McKinney discloses in McKinney the system as claimed in claim 1, wherein said one or more first audio characteristics comprises tempo and said at least one processor is configured to determine a speed of transitions between light effects based on said tempo, one or more parameter values of said first set of light effect parameter values being indicative of said speed of transitions between light effects. (¶15-16: the determining of the sections of the audio signal includes extracting statistical data relating to at least one property selected from the group consisting of: section boundary, tempo, dynamic variation, musical key. The light control parameter may reflect a characteristic property of the audio signal (A) in each of the sections; the light control parameter may include information to control at least one light property selected from the group consisting of: brightness, saturation, hue, and their spatial distribution and temporal dynamics.)
Regarding claim 10, Kim/McKinney discloses in McKinney the system as claimed in claim 1, wherein said one or more first audio characteristics and/or said one or more second audio characteristics comprise at least one of valence, key, timbre, and pitch and said at least one processor is configured to determine a color, a color temperature, or a color palette based on said valence, said key, said timbre, and/or said pitch and include in said first set and/or said second set of light effect parameter values one or more parameter values indicative of said color, said color temperature, or one or more colors selected from said color palette. (¶15-16: the determining of the sections of the audio signal includes extracting statistical data relating to at least one property selected from the group consisting of: section boundary, tempo, dynamic variation, musical key. The light control parameter may reflect a characteristic property of the audio signal (A) in each of the sections; the light control parameter may include information to control at least one light property selected from the group consisting of: brightness, saturation, hue, and their spatial distribution and temporal dynamics.)
Regarding claim 11, Kim/McKinney discloses in Kim a system as claimed in claim 1, wherein said at least one processor is configured to:
select a third subset from said plurality of lighting devices based on said type of each of said plurality of lighting devices (¶39L1-4: the haptic devices may be classified into user group A, B and C), said third subset being different from said first subset (¶39L10-13: user group C may provide green light),
obtain, based on said one or more types of said lighting devices in said third subset, one or more third audio characteristics of said audio content, said one or more third audio characteristics being different from said one or more first audio characteristics and said one or more second audio characteristics, determine a third set of light effect parameter values based on said one or more third audio characteristics, determine third light effects with said first set of light effect parameter values and said third set of light effect parameter values, and control, via said at least one transmitter, said third subset of lighting devices to render said third light effects while said audio rendering system renders said audio content. (¶44L1-15: haptic data generator extracting an audio bit pattern from an audio signal of content data; ¶48L1-8: the wireless data generator generates wireless data including a haptic pattern to be transmitted; ¶49L1-7: the wireless data generator may generate wireless data in real time based on a content bit pattern included in content data received in real time)
Regarding claim 12, Kim/McKinney discloses in Kim a system as claimed in claim 11, wherein said second subset and said third subset are different. (¶39L4-13: user group A provides red light; user group B provides blue light; user group C may provide green light)
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kim/McKinney in view of WO2019026236A1, hereinafter “Yoshino”.
Regarding claim 7, Kim/McKinney discloses a system as claimed in claim 1.
Kim/McKinney does not explicitly disclose
said one or more second audio characteristics comprise a dynamicity level of said audio content and/or a genre of said audio content, said second subset of said plurality of lighting devices comprises only multi pixel lighting devices, and said at least one processor is configured to determine a plurality of colors to be rendered on said multi pixel lighting devices based on said dynamicity level and/or said genre and include in said second set of light effect parameter values one or more parameter values indicative of said plurality of colors.
Yoshino discloses a system wherein an illumination control signal is generated based on song genre. (Page 1: The illumination control unit 4 performs illumination control of the illumination device 5 based on the illumination mode output from the song analysis device 1 and executes an illumination effect according to the song genre.)
It would have been obvious to one ordinarily skilled in the art, prior to the effective filing date of the application, to modify the lighting effects disclosed by Kim/McKinney to be generated based on song genre as disclosed by Yoshino.
One of ordinary skill in the art would have been motivated because it allows the illumination system to generate illumination modes based on different types of music genres.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAYMOND R CHAI whose telephone number is (571)270-0576. The examiner can normally be reached M-F 9:30AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander H Taningco can be reached at (571)272-8048. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Raymond R Chai/ Primary Examiner, Art Unit 2844