DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5, 7-10, 12, 14-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ramaswamy et al. (US Pat. 7,609,853) in view of Luff (US Pub. 2010/0195865) and in further view of Mandal et al. (US Pub. 2020/0349928), herein referenced as Ramaswamy, Luff, and Mandal, respectively.
Regarding claim 1, Ramaswamy discloses “An audience measurement system comprising: a processor and a memory having stored thereon machine readable instructions that, when executed by the processor (Col. 5 lines 48-65, Figs. 2-3), cause the audience measurement system to perform operations comprising:
providing, via a network and in response to a detected movement of a user in a media presentation area, an instruction to a … home device to output a request for verification of user presence in the media presentation area (Col. 2 lines 39-62, Col. 3 lines 11-20, Col. 3 lines 37-57, Col. 9 lines 11-51, Figs. 1-3, i.e., detecting motion of audience members and an audience change detector prompting the audience to identify members if a change in the number of people in the audience is visually detected, wherein the prompt may be an audible sound);
obtaining, via the network and from the … home device, an audio speech input received from the user by the … home device… (Col. 4 lines 4-12, Fig. 2, i.e., input device may be a microphone and voice recognition engine);
generating user presence data that correlates the audio speech input with media presented by a media presentation device; and storing the user presence data in a database.” (Col. 6 line 66-Col. 7 line 37, Figs. 4A-B, i.e., updating the database with input data (e.g., an audience member's identity)).
Ramaswamy teaches an audience change detector that prompts the audience to identify members if a change in the number of people in the audience is visually detected; however, Ramaswamy fails to explicitly disclose a smart home device.
Luff teaches the technique of providing a smart home device for audience measurement ([0028], [0113], [0115], [0120]-[0121], Figs. 13-15, i.e., smart speaker collects audience measurement data). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a smart home device for audience measurement as taught by Luff, to improve the audience detection system of Ramaswamy for the predictable result of utilizing a ubiquitous and affordable smart device to gather audience data that can also provide a variety of hands-free assistance controls.
The combination fails to explicitly disclose providing, via a network, an instruction to notify the user that the audio speech input is received.
Mandal teaches the technique of providing, via a network, an instruction to notify the user that the audio speech input is received ([0028], [0030], Fig. 1, i.e., output audio data 121 may correspond to confirmation that the voice command was received (e.g., “Here is a playlist of electronic dance music.”) or other responsive data). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing, via a network, an instruction to notify the user that the audio speech input is received as taught by Mandal, to improve the audience detection system of Ramaswamy for the predictable result of providing the user with assurance that the voice command was heard and understood.
Regarding claim 2, Ramaswamy discloses “wherein the request for verification of user presence in the media presentation area comprises an audio request provided through a speaker of the … home device.” (Col. 3 lines 37-57, i.e., the prompter may be an audible sound).
Regarding claim 3, Ramaswamy discloses “wherein the operations further comprise: after providing the instruction to the … home device to output the request for verification of user presence, activating a flash indicator of a meter device disposed in the media presentation area.” (Col. 3 lines 37-57, i.e., the prompter may be a flashing light).
Regarding claim 5, Ramaswamy discloses “wherein the operations further comprise: performing speech analysis of the audio speech input to detect a user identifier associated with the user.” (Col. 4 lines 4-12, Fig. 2, i.e., input device may be a microphone and voice recognition engine).
Regarding claim 7, Ramaswamy discloses “wherein correlating the audio speech input with media presented by the media presentation device comprises: correlating one or more timestamps associated with the audio speech input with one or more timestamps associated with motion signal data obtained by a motion sensor.” (Col. 4 lines 13-28, Col. 6 line 66-Col. 7 line 9, Col. 7 lines 56-67, i.e., an audience change detector utilizing a voice recognition engine and a time stamper recording a time and date that an audience change occurred).
Regarding claim 8, Ramaswamy discloses “A non-transitory machine readable storage medium comprising instructions that, when executed, cause a processor to perform operations (Col. 5 lines 48-65, Figs. 2-3) comprising:
providing, via a network and in response to a detected movement of a user in a media presentation area, an instruction to a … home device to output a request for verification of user presence in the media presentation area (Col. 2 lines 39-62, Col. 3 lines 11-20, Col. 3 lines 37-57, Col. 9 lines 11-51, Figs. 1-3, i.e., detecting motion of audience members and an audience change detector prompting the audience to identify members if a change in the number of people in the audience is visually detected, wherein the prompt may be an audible sound);
obtaining, via the network and from the … home device, an audio speech input received from the user by the … home device… (Col. 4 lines 4-12, Fig. 2, i.e., input device may be a microphone and voice recognition engine);
generating user presence data that correlates the audio speech input with media presented by a media presentation device; and storing the user presence data in a database.” (Col. 6 line 66-Col. 7 line 37, Figs. 4A-B, i.e., updating the database with input data (e.g., an audience member's identity)).
Ramaswamy teaches an audience change detector that prompts the audience to identify members if a change in the number of people in the audience is visually detected; however, Ramaswamy fails to explicitly disclose a smart home device.
Luff teaches the technique of providing a smart home device for audience measurement ([0028], [0113], [0115], [0120]-[0121], Figs. 13-15, i.e., smart speaker collects audience measurement data). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a smart home device for audience measurement as taught by Luff, to improve the audience detection system of Ramaswamy for the predictable result of utilizing a ubiquitous and affordable smart device to gather audience data that can also provide a variety of hands-free assistance controls.
The combination fails to explicitly disclose providing, via a network, an instruction to notify the user that the audio speech input is received.
Mandal teaches the technique of providing, via a network, an instruction to notify the user that the audio speech input is received ([0028], [0030], Fig. 1, i.e., output audio data 121 may correspond to confirmation that the voice command was received (e.g., “Here is a playlist of electronic dance music.”) or other responsive data). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing, via a network, an instruction to notify the user that the audio speech input is received as taught by Mandal, to improve the audience detection system of Ramaswamy for the predictable result of providing the user with assurance that the voice command was heard and understood.
Regarding claim 9, claim 9 recites limitations similar to those of claim 2 and is thus rejected for the reasons set forth above in the rejection of claim 2.
Regarding claim 10, claim 10 recites limitations similar to those of claim 3 and is thus rejected for the reasons set forth above in the rejection of claim 3.
Regarding claim 12, claim 12 recites limitations similar to those of claim 5 and is thus rejected for the reasons set forth above in the rejection of claim 5.
Regarding claim 14, claim 14 recites limitations similar to those of claim 7 and is thus rejected for the reasons set forth above in the rejection of claim 7.
Regarding claim 15, Ramaswamy discloses “A method comprising: providing, via a network and in response to a detected movement of a user in a media presentation area, an instruction to a … home device to output a request for verification of user presence in the media presentation area (Col. 2 lines 39-62, Col. 3 lines 11-20, Col. 3 lines 37-57, Col. 9 lines 11-51, Figs. 1-3, i.e., detecting motion of audience members and an audience change detector prompting the audience to identify members if a change in the number of people in the audience is visually detected, wherein the prompt may be an audible sound);
obtaining, via the network and from the … home device, an audio speech input received from the user by the … home device (Col. 4 lines 4-12, Fig. 2, i.e., input device may be a microphone and voice recognition engine);
generating user presence data that correlates the audio speech input with media presented by a media presentation device; and storing the user presence data in a database.” (Col. 6 line 66-Col. 7 line 37, Figs. 4A-B, i.e., updating the database with input data (e.g., an audience member's identity)).
Ramaswamy teaches an audience change detector that prompts the audience to identify members if a change in the number of people in the audience is visually detected; however, Ramaswamy fails to explicitly disclose a smart home device.
Luff teaches the technique of providing a smart home device for audience measurement ([0028], [0113], [0115], [0120]-[0121], Figs. 13-15, i.e., smart speaker collects audience measurement data). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing a smart home device for audience measurement as taught by Luff, to improve the audience detection system of Ramaswamy for the predictable result of utilizing a ubiquitous and affordable smart device to gather audience data that can also provide a variety of hands-free assistance controls.
The combination fails to explicitly disclose providing, via a network, an instruction to notify the user that the audio speech input is received.
Mandal teaches the technique of providing, via a network, an instruction to notify the user that the audio speech input is received ([0028], [0030], Fig. 1, i.e., output audio data 121 may correspond to confirmation that the voice command was received (e.g., “Here is a playlist of electronic dance music.”) or other responsive data). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing, via a network, an instruction to notify the user that the audio speech input is received as taught by Mandal, to improve the audience detection system of Ramaswamy for the predictable result of providing the user with assurance that the voice command was heard and understood.
Regarding claim 16, claim 16 recites limitations similar to those of claim 2 and is thus rejected for the reasons set forth above in the rejection of claim 2.
Regarding claim 17, claim 17 recites limitations similar to those of claim 3 and is thus rejected for the reasons set forth above in the rejection of claim 3.
Regarding claim 19, claim 19 recites limitations similar to those of claim 5 and is thus rejected for the reasons set forth above in the rejection of claim 5.
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ramaswamy in view of Luff and Mandal, and in further view of Conrad et al. (US Pub. 2015/0341692), herein referenced as Conrad.
Regarding claim 6, the combination of Ramaswamy, Luff, and Mandal fails to explicitly disclose “wherein correlating the audio speech input with media presented by the media presentation device comprises: correlating one or more timestamps associated with the audio speech input with one or more timestamps associated with signatures or watermarks that characterize the media presented by the media presentation device.”
Conrad teaches the technique of providing wherein correlating the audio speech input with media presented by the media presentation device comprises: correlating one or more timestamps associated with the audio speech input with one or more timestamps associated with signatures or watermarks that characterize the media presented by the media presentation device ([0012]-[0015], [0023], [0034], [0085], [0098], Figs. 1, 4-5, i.e., determining audience state or interest using passive sensor data and what portions the viewer watched and how intently the viewer watched those portions. For instance, determining a viewer is talking during particular portions of the media program, such as a movie with advertisements. Additionally, the portions of the media program may be marked as having a particular media type. For example, Incredible Family is both an adventure and a comedy program, with portions of the movie marked as having either of these media types).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein correlating the audio speech input with media presented by the media presentation device comprises: correlating one or more timestamps associated with the audio speech input with one or more timestamps associated with signatures or watermarks that characterize the media presented by the media presentation device as taught by Conrad, to improve the audience detection system of Ramaswamy for the predictable result of allowing advertisers and media providers to learn what kinds of shows people wish to watch ([0002]).
Regarding claim 13, claim 13 recites limitations similar to those of claim 6 and is thus rejected for the reasons set forth above in the rejection of claim 6.
Regarding claim 20, claim 20 recites limitations similar to those of claim 6 and is thus rejected for the reasons set forth above in the rejection of claim 6.
Allowable Subject Matter
Claims 4, 11, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexander Q Huerta whose telephone number is (571)270-3582. The examiner can normally be reached M-F 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at (571)272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER Q HUERTA/Primary Examiner, Art Unit 2425 February 12, 2026