DETAILED ACTION
This is in response to Applicant's communication filed on 09/22/2025, wherein:
Claims 1-17 are pending.
Claims 1, 5, and 11 are amended.
Response to Arguments
Applicant's arguments with respect to the pending claims have been considered but are moot because the newly amended limitations are addressed under a new ground of rejection.
Applicant's amendment has overcome the 35 U.S.C. § 112 rejection issued on 04/23/2025.
(1) On pages 1-3 of Applicant's response, Applicant argues that Barat fails to anticipate the claimed invention for the following reasons:
(a) Fundamentally, it appears that the invention as claimed and Barat are directed to different things. Barat is directed to picking a user device from a plurality of user devices that is the best device to deliver a received message. While Barat does take into account the environment, that is the environment of the device, and this is done so that the system can determine if a device can usefully deliver the notification. See Barat, paragraph 68. It should also be noted that sometimes the system will not deliver the notification but will simply automatically respond or even delete the message.
By contrast, the invention as claimed is directed to generating a plan to perform a determined action according to an engagement method and then executing the plan. There is only one digital assistant, which performs the actions of the claim, and the digital assistant is looking to get the user to do something, not merely deliver a message. In this regard, the digital assistant wants to cause the user to perform a function, e.g., a function that the digital assistant thinks that the user should presently perform.
(b) As for the particular language of the claims, there is no determined action taught or suggested in Barat. Rather, there is only the predetermined action to route the message to a device best suited to deliver the message that was received. With regard to the engagement method, to advance prosecution, the engagement method has been defined as being a method for communicating with the user of the digital assistant that has a predefined interruption intensity with respect to the user.
(c) Barat also does not determine a plan for executing the determined action based on at least the selected engagement method, given that Barat does not teach an engagement method as called for in the claim and also, as indicated above, goes through a fixed set of steps to determine where to best deliver the message, if it is to be delivered at all. The latter is not generating a customized plan as called for in the claim. The claim lastly calls for executing the generated plan by employing an input/output (I/O) device on which the digital assistant is executing. However, given that the point of Barat is to deliver the message to a different device than the one on which it is received, Barat appears to be teaching away from this claim element. Furthermore, as best as can be understood, the method of Barat appears to be executed by server 203, which is not a device that will output the message to the user.
(2) On pages 4-5 of Applicant's response, Applicant's remarks about the 35 U.S.C. § 103 rejections are based on the same grounds as indicated in (1).
Applicant's arguments have been carefully considered. However, the Examiner respectfully disagrees.
(1)(a) Applicant's remarks in this section discuss the concepts disclosed in Barat's teachings. However, Applicant does not particularly point out the differences between the Barat reference and the current claims. The Examiner has clearly addressed each limitation of claim 1 with Barat's teachings.
(1)(b) Applicant's remark on interruption intensity relates to the newly amended limitation. The newly amended details are also addressed by the Barat reference: wherein each engagement method is a method for communicating with the user of the digital assistant that has a predefined interruption intensity with respect to the user (Fig. 1, Fig. 10-11, Fig. 14 and ¶0068-0069 disclose engagement methods including responding, augmenting the notification, and routing to other devices – which indicates that each engagement method – i.e., responding, augmenting the notification, or routing to another device – has a different level of interruption to the user – i.e., a predefined level of interruption intensity).
(1)(c) Applicant remarks that the claim lastly calls for executing the generated plan by employing an input/output (I/O) device on which the digital assistant is executing. The Examiner has indicated that Barat's teachings do address the limitations: generating a customized plan (i.e., how to provide the notification) for executing the determined action (i.e., notification handling) based on at least the selected engagement method (i.e., responding, augmenting the notification, and routing to another device have different levels of engagement); and executing the generated plan by employing an input/output (I/O) device on which the digital assistant is executing (i.e., providing the notification) (Fig. 1, Fig. 10-11, Fig. 14 and ¶0068-0069 – "For example, ambient audio information may be an indicator that the user is in a high noise environment while calendar information obtained as other inputs at input block 1403 may indicate that the user is at an outdoor concert. In that case, sending a certain type of notification to the user's mobile telephone at the concert would not be useful. Thus, if the environment is discernible in decision block 1415, then the notification handling system will decide whether to provide the notification at the current device only as shown in decision block 1419, that is, to display the notification and not route the notification any further"; ¶0069 discloses routing the notification to another device; these are different customized plans based on the selected engagement method).
The current scope of the claims does not clearly define the elements "an action", "engagement method", "predefined interruption intensity", and "customized plan for executing the determined action". The Examiner has interpreted the claims under the broadest reasonable interpretation in light of the specification (MPEP § 2111) as a method comprising the steps of determining to provide a notification (i.e., an action) based on user and environment data, selecting a method of providing the notification wherein each method has a different level of interruption (i.e., selecting an engagement method and generating a customized plan), and executing the customized plan using the input/output of the digital assistant device. Based on this interpretation, the Barat reference anticipates the current scope of the claimed invention. For further details, please refer to the "Claim Rejections - 35 USC § 102" section of this Office action.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 5-6, 10-11, 15, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Barat et al. (US 20140280578 A1).
Regarding claim 1, Barat discloses a method for interacting by a digital assistant with a user of the digital assistant (Abstract, Fig. 1-2, Fig. 11-12, Fig. 14), comprising:
determining by the digital assistant an action to be executed by the digital assistant, wherein the action is determined based on at least a sensed current state of the user and a sensed current state of an environment near the user (Fig. 1, Fig. 11-12, Fig. 14 disclose determining an action for notification handling, such as routing and response actions, by the notification handling system – i.e., the personal digital assistant – wherein the action is determined based on the device environment – such as the light sensor, audio sensor, and temperature sensor in ¶0022 – and the user's state – such as the user history in ¶0030, the user running/walking in ¶0022, and Fig. 10-14);
selecting an engagement method from a plurality of engagement methods based on the current state of the user, the current state of an environment near the user, and the selected action (Fig. 1, Fig. 10-11, Fig. 14 and ¶0068-0069 disclose determining whether to respond, augment the notification, or route to other devices – i.e., selecting different engagement methods); wherein each engagement method is a method for communicating with the user of the digital assistant that has a predefined interruption intensity with respect to the user (Fig. 1, Fig. 10-11, Fig. 14 and ¶0068-0069 disclose engagement methods including responding, augmenting the notification, and routing to other devices – which indicates that each engagement method – i.e., responding, augmenting the notification, or routing to another device – has a different level of interruption to the user – i.e., a predefined level of interruption intensity);
generating a customized plan for executing the determined action based on at least the selected engagement method; and executing the generated plan by employing an input/output (I/O) device on which the digital assistant is executing (Fig. 1, Fig. 10-11, Fig. 14 and ¶0068-0069 – "For example, ambient audio information may be an indicator that the user is in a high noise environment while calendar information obtained as other inputs at input block 1403 may indicate that the user is at an outdoor concert. In that case, sending a certain type of notification to the user's mobile telephone at the concert would not be useful. Thus, if the environment is discernible in decision block 1415, then the notification handling system will decide whether to provide the notification at the current device only as shown in decision block 1419, that is, to display the notification and not route the notification any further"; ¶0069 discloses routing the notification to another device; these are different customized plans based on the selected engagement method).
Regarding claim 5, Barat discloses the method of claim 1, wherein the selected engagement method is one of: intrusive proactive (Fig. 14 steps 1419-1427, ¶0068, and ¶0071 disclose providing a notification to more than one device – i.e., intrusive proactive), direct proactive gateway, indirect proactive gateway, categorical proactive gateway, contextual proactive gateway, subtle proactive, and silent proactive.
Regarding claim 6, the scope and content of the claim recite a system for performing the method of claim 1; therefore, it is addressed as in claim 1.
Regarding claim 10, the scope and content of the claim recite a system for performing the method of claim 5; therefore, it is addressed as in claim 5.
Regarding claim 11, Barat discloses a method performed by an input/output (I/O) device having a digital assistant (Fig. 5), at least one sensor (Fig. 5 – sensor 519), and at least one resource (Fig. 5 – sensor hub 517), the method comprising:
determining by the digital assistant an action to be executed by the I/O device, wherein the action is determined based on at least a sensed current state of a user of the I/O device and a current state of an environment near the user, wherein the at least one sensor is used to sense information upon which is based at least one of the current state of the user and the current state of the environment near the user (Fig. 1, Fig. 11-12, Fig. 14 disclose determining an action for notification handling, such as routing and response actions, by the notification handling system – i.e., the personal digital assistant – wherein the action is determined based on the device environment – such as the light sensor, audio sensor, and temperature sensor in ¶0022 – and the user's state – such as the user history in ¶0030, the user running/walking in ¶0022, and Fig. 10-14);
selecting an engagement method from a plurality of engagement methods based on the current state of the user, the current state of an environment near the user, and the determined action (Fig. 1, Fig. 10-11, Fig. 14 and ¶0068-0069 disclose determining whether to respond, augment the notification, or route to other devices – i.e., selecting different engagement methods); wherein each engagement method is a method for communicating with the user of the digital assistant that has a predefined interruption intensity with respect to the user (Fig. 1, Fig. 10-11, Fig. 14 and ¶0068-0069 disclose engagement methods including responding, augmenting the notification, and routing to other devices – which indicates that each engagement method – i.e., responding, augmenting the notification, or routing to another device – has a different level of interruption to the user – i.e., a predefined level of interruption intensity);
generating a customized plan for executing the determined action based on at least the selected engagement method; and executing the generated plan by at least operation of at least one of the at least one resource (Fig. 1, Fig. 10-11, Fig. 14 and ¶0068-0069 – "For example, ambient audio information may be an indicator that the user is in a high noise environment while calendar information obtained as other inputs at input block 1403 may indicate that the user is at an outdoor concert. In that case, sending a certain type of notification to the user's mobile telephone at the concert would not be useful. Thus, if the environment is discernible in decision block 1415, then the notification handling system will decide whether to provide the notification at the current device only as shown in decision block 1419, that is, to display the notification and not route the notification any further"; ¶0069 discloses routing the notification to another device; these are different customized plans based on the selected engagement method).
Regarding claim 15, Barat discloses the method of claim 11, wherein the selected engagement method is one of: intrusive proactive (Fig. 14 steps 1419-1427, ¶0068, and ¶0071 disclose providing a notification to more than one device – i.e., intrusive proactive), direct proactive gateway, indirect proactive gateway, categorical proactive gateway, contextual proactive gateway, subtle proactive, and silent proactive.
Regarding claim 17, Barat discloses the method of claim 11, wherein the at least one sensor is part of the at least one resource (Fig. 5 – sensor hub 517).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-3, 7-8, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Barat et al. (US 20140280578 A1) in view of Otebolaku et al. ("User context recognition using smartphone sensors and classification models", available online 03/19/2016 in the Journal of Network and Computer Applications).
Regarding claim 2, Barat discloses the method of claim 1, wherein determining the action further comprises: collecting a dataset related to the user, the dataset including at least one piece of data sensed by at least one sensor coupled to the digital assistant (Fig. 5, ¶0022, Fig. 11, and ¶0063 disclose collecting data using the device sensor 1101). However, the reference is silent on details about applying a machine learning model trained to determine a current state based on the collected dataset, wherein the current state is the state of the user and the state of the environment near the user in real time or near real time.
Otebolaku discloses applying a machine learning model trained to determine a current state based on the collected dataset, wherein the current state is the state of the user and the state of the environment near the user in real time or near real time (sections 3-5 disclose using machine learning to determine the mobile device context – i.e., the current state of the user in real time or near real time).
Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Barat to incorporate machine learning for context determination from Otebolaku, because doing so would make use of a known technique to improve similar devices (methods, or products) in the same way (MPEP § 2141, III), utilizing a modern technique for processing complex datasets for context determination.
Regarding claim 3, Barat and Otebolaku disclose the method of claim 2, wherein the collected dataset includes: real-time data related to a user obtained via the at least one sensor (Barat - ¶0022 – "The sensor data could be an accelerometer, a gyroscope, a light level sensor, a temperature sensor, or an audio sensor or some other type of sensor. Other types of environments may include being in motion due to the user running or walking, being in motion in a vehicle, being in motion on a train, or being in a high-noise level environment") and historical data related to past activity of the user (Barat - ¶0030 – "The user behavior provides information about how a user has responded to notifications of a given type and having a given notification content, origin and priority in the past. The notification handling system selects one or more associated devices capable of enabling the user to respond to the new notifications accordingly to past assessed user behavior").
Regarding claim 7, the scope and content of the claim recite a system for performing the method of claim 2; therefore, it is addressed as in claim 2.
Regarding claim 8, the scope and content of the claim recite a system for performing the method of claim 3; therefore, it is addressed as in claim 3.
Regarding claim 12, Barat discloses the method of claim 11, wherein determining the action further comprises: collecting a dataset related to the user, the dataset including at least one piece of data sensed by at least one sensor coupled to the digital assistant. However, the reference is silent on details about applying a machine learning model trained to determine a current state based on the collected dataset, wherein the current state is the state of the user and the state of the environment near the user in real time or near real time.
Otebolaku discloses applying a machine learning model trained to determine a current state based on the collected dataset, wherein the current state is the state of the user and the state of the environment near the user in real time or near real time (sections 3-5 disclose using machine learning to determine the mobile device context – i.e., the current state of the user in real time or near real time).
Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Barat to incorporate machine learning for context determination from Otebolaku, because doing so would make use of a known technique to improve similar devices (methods, or products) in the same way (MPEP § 2141, III), utilizing a modern technique for processing complex datasets for context determination.
Regarding claim 13, Barat and Otebolaku disclose the method of claim 12, wherein the collected dataset includes: real-time data related to a user obtained via the at least one sensor (Barat - ¶0022 – "The sensor data could be an accelerometer, a gyroscope, a light level sensor, a temperature sensor, or an audio sensor or some other type of sensor. Other types of environments may include being in motion due to the user running or walking, being in motion in a vehicle, being in motion on a train, or being in a high-noise level environment") and historical data related to past activity of the user (Barat - ¶0030 – "The user behavior provides information about how a user has responded to notifications of a given type and having a given notification content, origin and priority in the past. The notification handling system selects one or more associated devices capable of enabling the user to respond to the new notifications accordingly to past assessed user behavior").
Claims 4, 9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Barat et al. (US 20140280578 A1) in view of Otebolaku et al. ("User context recognition using smartphone sensors and classification models", available online 03/19/2016 in the Journal of Network and Computer Applications) and Shuster et al. (US 20160255466 A1).
Regarding claim 4, Barat and Otebolaku disclose the method of claim 3; however, they are silent on further details wherein at least one piece of historical data related to the user is obtained from at least one source external to the digital assistant.
Shuster discloses wherein at least one piece of historical data related to the user is obtained from at least one source external to the digital assistant (¶0067 – historical data from private data system such as social media, medical and immunization, insurance data…).
Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Barat and Otebolaku to provide access to external source data from Shuster, because doing so would make use of a known technique to improve similar devices (methods, or products) in the same way (MPEP § 2141, III), providing more comprehensive context information.
Regarding claim 9, the scope and content of the claim recite a system for performing the method of claim 4; therefore, it is addressed as in claim 4.
Regarding claim 14, Barat and Otebolaku disclose the method of claim 13; however, they are silent on further details wherein at least one piece of historical data related to the user is obtained from at least one source external to the digital assistant.
Shuster discloses wherein at least one piece of historical data related to the user is obtained from at least one source external to the digital assistant (¶0067 – historical data from private data system such as social media, medical and immunization, insurance data…).
Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Barat and Otebolaku to provide access to external source data from Shuster, because doing so would make use of a known technique to improve similar devices (methods, or products) in the same way (MPEP § 2141, III), providing more comprehensive context information.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Barat et al. (US 20140280578 A1) in view of Bell (US 20110298614 A1).
Regarding claim 16, Barat discloses the method of claim 11; however, Barat is silent on the further details of claim 16.
Bell discloses wherein the at least one sensor is a virtual sensor (¶0062, ¶0071 disclose obtaining data through an external source, such as weather data – i.e., a virtual sensor).
Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Barat to incorporate weather data for triggering notifications from Bell, because doing so would apply a known technique to a known device (method, or product) ready for improvement to yield predictable results (MPEP § 2141, III), utilizing various sources of information for context determination.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUNG HONG whose telephone number is (571)270-7928. The examiner can normally be reached on Monday-Friday from 8:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JINSONG HU, can be reached on (571) 272-3965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/DUNG HONG/
Primary Examiner, Art Unit 2643