DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 30, 2026 has been entered.
Response to Arguments
Applicants argue that the cited prior art fails to teach the claims as amended. Applicants' arguments are persuasive, but are moot in view of the new grounds of rejection set forth below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 9-10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Stottlemyer (PGPUB 2016/0269524) in view of Oni et al. (PGPUB 2017/0109849), hereinafter referenced as Oni, and in further view of Naiga et al. (PGPUB 2017/0124594), hereinafter referenced as Naiga.
Regarding claims 1, 10 and 16, Stottlemyer discloses a smart sensor, system and method, hereinafter referenced as a system, comprising:
a computer processor (fig. 1, element 106);
a microphone (fig. 1, element 116); and
a memory storing a voice control module (fig. 1, element 108), wherein the voice control module is configured to:
resolve a first voice command based on a first user input received by the microphone (receive an electrical signal of the spoken audio received via the microphone; p. 0057);
determine a first identity of a first registered user (identify voice profile 174D) of a plurality of registered users based on the first voice command (identify the speaker using the voice profiles of 174A-D representing a plurality of users; p. 0036-0037),
wherein the resolving the first voice command includes determining a first instruction (process the command, for example, call mom; p. 0036-0037); and
access first user data associated with the first registered user on a first user device associated with the first registered user based on the determining the first identity of the first registered user (identify the speaker using the voice profiles of 174A-D representing a plurality of users; p. 0036-0037),
wherein the smart sensor is configured to carry out the first instruction based on the first user data (process the command, for example, call mom; p. 0036-0037), but does not specifically teach wherein the resolving the second voice command includes determining a second instruction based on a second series of natural language questions and answers that is generated by the cloud-based cognitive computing system and personalized to the second registered user, or providing a system that performs tone analysis, speech rate analysis and speech volume analysis on voice commands received from a user.
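For illustration only, the speaker-identification flow mapped above could be sketched as follows; all identifiers (VoiceProfile, identify_speaker, carry_out, and the similarity measure) are hypothetical and appear in neither the claims nor Stottlemyer:

    from dataclasses import dataclass, field

    @dataclass
    class VoiceProfile:
        user_id: str
        embedding: list[float]  # stand-in for a stored voice signature (cf. 174A-D)

    @dataclass
    class RegisteredUser:
        user_id: str
        user_data: dict = field(default_factory=dict)  # e.g., contacts on that user's device

    def similarity(a: list[float], b: list[float]) -> float:
        # Placeholder score; a real system would compare acoustic features.
        return -sum((x - y) ** 2 for x, y in zip(a, b))

    def identify_speaker(signal: list[float], profiles: list[VoiceProfile]) -> str:
        # Match the microphone signal against each stored profile and return
        # the identity of the best-matching registered user.
        return max(profiles, key=lambda p: similarity(signal, p.embedding)).user_id

    def carry_out(instruction: str, user: RegisteredUser) -> str:
        # Execute the instruction against the identified user's own data,
        # e.g., "call mom" dials the entry from that user's contact list.
        if instruction.startswith("call "):
            name = instruction.removeprefix("call ")
            number = user.user_data.get("contacts", {}).get(name, "<unknown>")
            return f"dialing {number} for {user.user_id}"
        return f"unhandled instruction: {instruction}"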
Oni discloses a system wherein resolving the second voice command (voice recognition; p. 0052) includes determining a second instruction based on a second series of natural language questions and answers that is generated by the cloud-based cognitive computing system and personalized to the second registered user (series of questions and answers; p. 0080, 0055-0056), to output personalized, modifiable executable pathway options.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of Stottlemyer with the teachings of Oni as described above, to provide improved personalized pathways.
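As a hypothetical sketch of question-and-answer refinement of the kind attributed to Oni (the names determine_instruction and follow_ups are invented here, not drawn from Oni):

    def determine_instruction(command: str, ask, profile: dict) -> dict:
        # Refine a vague voice command into an executable instruction through
        # a short series of follow-up questions tailored to this user's
        # profile; in the claimed arrangement the series would be generated
        # by a cloud-based cognitive computing system.
        instruction = {"command": command}
        for slot, question in profile.get("follow_ups", {}).items():
            instruction[slot] = ask(question)  # answer supplied by the speaker
        return instruction

    # Example: a profile of {"follow_ups": {"room": "Which room?"}} narrows
    # "turn on the lights" to a specific room for this particular user.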
Naiga discloses a system comprising providing a system that performs tone analysis (user’s tone of voice), speech rate analysis (length of intervals between speaking) and speech volume analysis (user’s volume of voice) on voice commands received from a user (p. 0030), to determine relevant data.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system as described above with the teachings of Naiga, to provide customized data.
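The three analyses attributed to Naiga might be approximated as follows (a minimal sketch; the function names and measures are assumptions, not Naiga's disclosure):

    import math

    def speech_volume(samples: list[float]) -> float:
        # Root-mean-square amplitude as a rough measure of the user's volume of voice.
        return math.sqrt(sum(s * s for s in samples) / len(samples)) if samples else 0.0

    def speech_rate(word_times: list[float]) -> float:
        # Mean length of the intervals between spoken words; longer gaps
        # indicate slower speech.
        gaps = [b - a for a, b in zip(word_times, word_times[1:])]
        return sum(gaps) / len(gaps) if gaps else 0.0

    def tone(samples: list[float]) -> str:
        # Placeholder: a real tone analysis would classify pitch/prosody features.
        return "neutral"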
Regarding claim 9, Stottlemyer discloses a system wherein the memory stores a cognitive module that is configured to perform a cognitive analysis based on the first voice command; and
wherein the carrying out the first instruction is performed based on the cognitive analysis (voice recognition techniques; p. 0023, 0036-0037).
Claims 2, 4-8, 11, 13-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Stottlemyer in view of Oni and Naiga and in further view of Shimy et al. (PGPUB 2011/0069940), hereinafter referenced as Shimy.
Regarding claims 2, 11 and 17, they are interpreted and rejected for similar reasons as set forth above. In addition, Stottlemyer teaches a module configured to:
resolve a second voice command based on a second user input received by the microphone (receive a command for the occupant of device 152A of figure 2A; p. 0027-0028 and 0036-0037); and
determine a second identity of a second registered user from the plurality of registered users based on the second voice command (identify the speaker using the voice profiles of 174A-D representing a plurality of users; p. 0036-0037),
wherein the resolving the second voice command includes determining a second instruction (process the command; p. 0036-0037); and
access second user data associated with the second registered user on a second user device associated with the second registered user based on the determining the second identity of the second registered user (identify the speaker using the voice profiles of 174A-D representing a plurality of users; p. 0036-0037),
wherein the smart sensor is configured to carry out the second instruction based on the second user data (process the command; p. 0036-0037), but does not specifically teach wherein the first user data comprises a first subscription to a first tier of services, a second user data comprises a second subscription to a second tier of services, and the first tier of services and the second tier of services are different from each other.
Shimy discloses a system wherein the first user data comprises a first subscription to a first tier of services (adult subscription), a second user data comprises a second subscription to a second tier of services (child subscription), and the first tier of services and the second tier of services are different from each other (p. 0129, 0164), to tailor output data.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system as described above with the teachings of Shimy, to create a personalized experience.
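Purely as an illustrative data shape for differing tiers of services (cf. Shimy's adult/child subscriptions; the tier names and token values below are invented):

    TIERS = {
        "adult": {"services": ["news", "movies"], "credentials": {"movies": "tok-A"}},
        "child": {"services": ["cartoons"], "credentials": {"cartoons": "tok-B"}},
    }

    def allowed(user_data: dict, service: str) -> bool:
        # A request is honored only if the identified user's subscription
        # tier includes the requested service.
        return service in TIERS[user_data["tier"]]["services"]

    # Example: allowed({"tier": "child"}, "movies") -> False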
Regarding claims 4, 13 and 20, they are interpreted and rejected for similar reasons as set forth above. In addition, Shimy discloses a system wherein the first tier of services comprise credentials for a first streaming service and the second tier of services comprise credentials for a second streaming service (p. 0033-0037).
Regarding claim 5, it is interpreted and rejected for similar reasons as set forth above. In addition, Shimy discloses a system wherein the second instruction is the same as the first instruction, and the first instruction and the second instruction are carried out simultaneously at different locations (p. 0139).
Regarding claim 6, it is interpreted and rejected for similar reasons as set forth above. In addition, Shimy discloses a system wherein a location of the first user is detected using a motion sensor (tracking the movement of users; p. 0088-0092).
Regarding claims 7 and 18, they are interpreted and rejected for similar reasons as set forth above. In addition, Stottlemyer discloses a system wherein the memory further stores a peer interaction module that is configured to communicate with an additional smart sensor via wireless communication in a mesh network (connect smart devices to create a single network, such as Zigbee; p. 0020-0027).
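A toy sketch of a peer-interaction module flooding messages across a mesh of smart sensors follows (real deployments would sit on a protocol stack such as Zigbee; the class and method names are hypothetical):

    class PeerModule:
        def __init__(self, node_id: str):
            self.node_id = node_id
            self.peers: list["PeerModule"] = []
            self.seen: set[str] = set()

        def send(self, msg_id: str, payload: str) -> None:
            # Flood the message: each node forwards it exactly once, so it can
            # hop through intermediate sensors to reach nodes out of direct range.
            if msg_id in self.seen:
                return
            self.seen.add(msg_id)
            self.handle(payload)
            for peer in self.peers:
                peer.send(msg_id, payload)

        def handle(self, payload: str) -> None:
            print(f"{self.node_id} received: {payload}")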
Regarding claims 8 and 19, they are interpreted and rejected for similar reasons as set forth above. In addition, Stottlemyer discloses a system wherein the first instruction comprises playing a first audio file at the smart sensor based on the first user data, and the second instruction comprises playing a second audio file at the additional smart sensor based on the second user data (provide one or more outputs; p. 0022-0023, 0036-0037).
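One way to picture the mapped limitation, with invented sensor and playlist names:

    def route_audio(identity: str, sensors: dict, playlists: dict) -> str:
        sensor = sensors[identity]      # each registered user maps to a sensor
        track = playlists[identity][0]  # audio file drawn from that user's data
        return f"playing {track} at {sensor}"

    sensors = {"user_a": "smart_sensor", "user_b": "additional_sensor"}
    playlists = {"user_a": ["a1.mp3"], "user_b": ["b1.mp3"]}
    print(route_audio("user_b", sensors, playlists))  # -> playing b1.mp3 at additional_sensor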
Regarding claim 14, it is interpreted and rejected for similar reasons as set forth above in the combination of claims 5 and 6.
Regarding claim 15, it is interpreted and rejected for similar reasons as set forth above. In addition, Stottlemyer discloses a system wherein the first instruction comprises playing a first audio file at the smart sensor based on the first user data, and the second instruction comprises playing a second audio file at the additional smart sensor based on the second user data (provide one or more outputs; p. 0022-0023, 0036-0037).
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Stottlemyer in view of Oni, Naiga and Shimy and in further view of Toiyama (PGPUB 2017/0069321).
Regarding claims 3 and 12, Stottlemyer in view of Oni, Naiga and Shimy discloses a system as described above, but does not specifically teach a system wherein the second instruction is different from the first instruction, and the first instruction and the second instruction are carried out simultaneously.
Toiyama discloses a system wherein the second instruction is different from the first instruction, and the first instruction and the second instruction are carried out simultaneously (a plurality of users simultaneously utter voice commands to microphones and a plurality of processes are performed; p. 0045-0057, 0066, 0070-0072), to allow flexibility.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method as described above with the teachings of Toiyama, to provide a system that is tailored to a plurality of users' needs.
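Simultaneous execution of two different instructions, as attributed to Toiyama, could be sketched with ordinary threading (an assumption of this sketch, not Toiyama's implementation):

    import threading, time

    def run(instruction: str, user: str) -> None:
        # Stand-in for carrying out one identified user's instruction.
        time.sleep(0.1)
        print(f"{user}: completed {instruction!r}")

    # Two different voice commands from two identified speakers run concurrently.
    t1 = threading.Thread(target=run, args=("play jazz", "user_a"))
    t2 = threading.Thread(target=run, args=("dim lights", "user_b"))
    t1.start(); t2.start()
    t1.join(); t2.join()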
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. This information is detailed in the attached PTO-892 (Notice of References Cited).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAKIEDA R JACKSON whose telephone number is (571)272-7619. The examiner can normally be reached Mon - Fri 6:30a-2:30p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAKIEDA R JACKSON/Primary Examiner, Art Unit 2657