DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 23-24 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Shintani (US 2022/0239867) in view of Enohara (US 2011/0205366).
Regarding claims 1 and 23-24, Shintani teaches an information processing method for estimating a characteristic of a user, by a computer, the information processing method comprising: acquiring first information indicative of a device operation (device information is determining which display to turn on, 312/314 and 406/408 for inside environments 302/402 respectively) and a behavior (head movement, emotional status, and proximity of user) of a target user to be estimated ([0049-0052]); acquiring second information indicative of presence or absence ([0067]) of another user different from the target user in an environment where the target user is present (Fig. 4 shows an embodiment where the system recognizes users 420 and 422 while executing the same method of tracking head movement, emotional state [0069], or proximity of user); extracting, on the basis of the first information and the second information, first action information indicative of at least one of a first device operation (user action of being close to or looking at a respective display) or a first behavior (behavior being the head movement, emotional status, and proximity of user) of the target user in a first environment where the other user is absent (displaying an image on display 312/314 and 406/408 based on tracking user head movement, emotional state [0069], or proximity of user), and second action information indicative of at least one of a second device operation (user action of being close to or looking at a respective display) or a second behavior (behavior being the head movement, emotional status, and proximity of user) of the target user in a second environment where the other user is present (Fig. 4 shows an embodiment where the system recognizes users 420 and 422 while executing the same method of tracking head movement, emotional state [0069], or proximity of user).
Although Shintani teaches the limitations as discussed above, he does not explicitly teach estimating a first characteristic that is a characteristic of the target user in the first environment on the basis of the first action information, and a second characteristic that is a characteristic of the target user in the second environment on the basis of the second action information; and outputting at least one of first characteristic information indicative of the first characteristic or second characteristic information indicative of the second characteristic for providing a service to the target user based on the at least one of the first characteristic information or the second characteristic information.
However, in the field of providing an operation to a user, Enohara teaches estimating a first characteristic (activity amount of an action) that is a characteristic of the target user in the first environment (a user being in the room) on the basis of the first action information (action being standing, sitting, or walking [0024-0027]), and a second characteristic (a different activity amount of a different action) that is a characteristic of the target user in the second environment (Figs. 5-7 show multiple users in the room) on the basis of the second action information (action being standing, sitting, or walking [0024-0027]); and outputting at least one of first characteristic information indicative of the first characteristic or second characteristic information indicative of the second characteristic for providing a service to the target user based on the at least one of the first characteristic information or the second characteristic information (Fig. 4, S9 is the service of operating an air conditioner based on the previous steps S3-S8).
Therefore, it would have been obvious to one of ordinary skill in the art to combine the device as taught by Shintani with the method of controlling service output as taught by Enohara. This combination would provide a system capable of controlling electronic devices based on recognized user intentions.
Allowable Subject Matter
Claims 2-22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim 2 is indicated allowable based on defining a relationship between one or more candidate traits and one or more feature groups indicative of features of device operations or behaviors of the target user being acquired; in a first case that the first device operation or the first behavior showing one or more first feature groups included in the one or more feature groups is included in the first action information, one or more first candidate traits associated with the one or more first feature groups are specified among the one or more candidate traits, and the specified one or more first candidate traits are estimated to be the first characteristic; and in a second case that the second device operation or the second behavior showing one or more second feature groups included in the one or more feature groups is included in the second action information, one or more second candidate traits associated with the one or more second feature groups are specified among the one or more candidate traits, and the specified one or more second candidate traits are estimated to be the second characteristic.
Claim 15 is indicated allowable based on acquiring third information indicative of a third behavior of the target user in a third environment where the target user is currently present; determining, on the basis of the third information, which of the first environment or the second environment is the third environment where the target user is currently present; in a first case that the third environment is the first environment, acquiring the first characteristic information, determining a first service to be performed for the target user on the basis of the first characteristic information, and performing the first service; and in a second case that the third environment is the second environment, acquiring the second characteristic information, determining a second service to be performed for the target user on the basis of the second characteristic information, and performing the second service.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRE L MATTHEWS whose telephone number is (571)270-5806. The examiner can normally be reached Mon-Fri 9:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDRE L MATTHEWS/ Primary Examiner, Art Unit 2621