DETAILED ACTION
This action is made in response to the amendments/remarks filed on November 6, 2025. This action is made final.
Claims 1-20 are pending. Claims 1, 2, 5, 11, 12, and 15 have been amended. Claims 1 and 11 are independent claims.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s argument with respect to the previous 103 rejection is moot in light of the new grounds of rejection.
Applicant’s argument with respect to the previous 101 rejection has been fully considered but is not persuasive. Applicant argues that the claims, as amended, go beyond a method of organizing human activity, are directed to an improvement to the functioning of a computer or to computer-related technology, and provide a practical application. However, the Examiner respectfully disagrees.
MPEP 2106.04(a)(2)(II) states that a claimed invention is directed to certain methods of organizing human activity if the identified claim elements contain limitations that encompass fundamental economic principles or practices, commercial or legal interactions, or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The Examiner submits that the identified claim elements represent a series of rules or instructions that a person or persons, with or without the aid of a computer, would follow to grant or deny another person access to their personal/private/secure information. But for the recitation of the claims being implemented on a computer, the claims are akin to a user granting/denying another person access to their home or a teller granting/denying access to a secure bank vault and, therefore, constitute a method of organizing human activity.
Furthermore, MPEP 2106.04(d)(1) states that a practical application may be present where the claimed invention improves the functioning of a computer or improves another technology. See also MPEP 2106.05(a)(I). The technological environment of Applicant’s claim is a general-purpose computer (see Fig. 1). Applicant has not identified, nor can the Examiner locate, any improvement to the functioning of the computer that results from the implementation of Applicant’s claim, nor does the claim recite “another technology” for which an improvement is realized. There is no indication that the computer (or another technology) is made to run faster or more efficiently, to utilize less power, or that it is otherwise improved upon. Furthermore, insofar as a distributed ledger system is recited, the additional element is recited at a high level of generality and amounts to generally linking the use of the idea to a particular technology or field of use, but is not recited as an improvement thereon. Because there is no improvement to the functioning of the computer or another technology, a practical application is not present.
Accordingly, the previous 101 rejection is maintained.
Drawings
The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the “providing, by the at least one computing device, an interactive communication session between the virtual representation of the person and the at least one other computer device; receiving, by the at least one computing device from the at least one other computing device, during the interactive communication session, a request to access at least some of the secured information; accessing, by the at least one computing device, at least one permission provided by the person; determining, by the at least one computing device as a function of the permission, that the at least one other computing device does not have authorization to access the at least some of the secured information; to determine whether the at least one other computing device has authorization to access the secured information; where the at least one other computing device does not have authorization to access the secured information; providing to the person, by the at least one computing device via the virtual representation of the person, the request to access the at least some of the secured information; receiving, by the at least one computing device via the virtual representation of the person, authorization from the person for the at least one other computing device to access the at least some of the secured information; and automatically providing, by the at least one computing device in response to receiving the authorization, the at least one other computing device with access to at least some of the secured information, via the virtual representation of the person, during the interactive communication session” must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1-10 recite a method for providing access to user data, which is within the statutory category of a process. Claims 11-20 recite a system for providing access to user data, which is within the statutory category of a machine.
Claims are eligible for patent protection under § 101 if they are in one of the four statutory categories and not directed to a judicial exception to patentability. Alice Corp. v. CLS Bank Int'l, 573 U.S. ___ (2014). Claims 1-20, each considered as a whole and as an ordered combination, are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
MPEP 2106 Step 2A – Prong 1:
The bolded limitations of:
Claims 1 and 11 (claim 1 being representative)
receiving, by at least one computing device from each of a respective data collection sources, information associated with a person; securing, by the at least one computing device, at least some of the received information at least as a function of at least one token to provide secured information; storing, on a distributed ledger system, the secured information; providing, by the at least one computing device via a virtual reality and/or augmented reality (“VR/AR”) environment, a virtual representation of the person, wherein the virtual representation of the person has access to the secured information as a function of the distributed ledger system; providing, by the at least one computing device, an interactive communication session between the virtual representation of the person and the at least one other computer device; receiving, by the at least one computing device from the at least one other computing device, during the interactive communication session, a request to access at least some of the secured information; accessing, by the at least one computing device, at least one permission provided by the person; determining, by the at least one computing device as a function of the permission, that the at least one other computing device does not have authorization to access the at least some of the secured information; to determine whether the at least one other computing device has authorization to access the secured information; where the at least one other computing device does not have authorization to access the secured information; providing to the person, by the at least one computing device via the virtual representation of the person, the request to access the at least some of the secured information; receiving, by the at least one computing device via the virtual representation of the person, authorization from the person for the at least one other computing device to access the at least some of the secured information; and automatically providing, by the at least one computing device in response to receiving the authorization, the at least one other computing device with access to at least some of the secured information, via the virtual representation of the person, during the interactive communication session.
as presently drafted, under the broadest reasonable interpretation, cover a method of organizing human activity (i.e., managing personal behavior including following rules or instructions) but for the recitation of generic computer components. For example, but for the noted computer elements, the claim encompasses a person following rules or instructions to grant or deny access to their secured data to another user in the manner described in the abstract idea. The Examiner further notes that “methods of organizing human activity” includes a person’s interaction with a computer (see October 2019 Update: Subject Matter Eligibility at Pg. 5). If the claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
MPEP 2106 Step 2A – Prong 2:
This judicial exception is not integrated into a practical application because there are no meaningful limitations that transform the exception into a patent-eligible application. The additional elements merely amount to instructions to apply the exception using generic computer components (“readable media”; “at least one computing device”; “at least one other computing device”—all recited at a high level of generality). Although these components store and execute the instructions that perform the abstract idea itself, this does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it." (See MPEP 2106.04(d) and 2106.05(f) indicating that mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application.) Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.
The “distributed ledger system”, “virtual reality and/or augmented reality environment”, and “virtual representation of the person” are not generic computer components; however, they are recited at a high level of generality and similarly amount to generally linking the abstract idea to a particular technological environment. (See MPEP 2106.04(d)(1) indicating that generally linking an abstract idea to a particular technological environment does not amount to integrating the abstract idea into a practical application.)
The claims only manipulate abstract data elements into another form. They do not set forth improvements to another technological field or the functioning of the computer itself and instead use computer elements as tools in a conventional way to improve the functioning of the abstract idea identified above. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. None of the additional elements recited "offers a meaningful limitation beyond generally linking 'the use of the [method] to a particular technological environment,' that is, implementation via computers." Alice Corp., slip op. at 16 (citing Bilski v. Kappos, 561 U.S. 610, 611 (U.S. 2010)).
At the levels of abstraction described above, the claims do not readily lend themselves to a finding that they are directed to a nonabstract idea. Therefore, the analysis proceeds to step 2B. See BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1349 (Fed. Cir. 2016) ("The Enfish claims, understood in light of their specific limitations, were unambiguously directed to an improvement in computer capabilities. Here, in contrast, the claims and their specific limitations do not readily lend themselves to a step-one finding that they are directed to a nonabstract idea. We therefore defer our consideration of the specific claim limitations’ narrowing effect for step two.") (citations omitted).
MPEP 2106 Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception for the same reasons as presented in Step 2A Prong 2. Moreover, the additional elements recited are known and conventional generic computing elements (“readable media”; “at least one computing device”; “at least one other computing device”—see Specification Fig. 1, [0057], [0059] describing the various components as general purpose, common, standard, known to one of ordinary skill, and at a high level of generality, and in a manner that indicates that the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy the statutory disclosure requirements). Therefore, these additional elements amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept that amounts to significantly more. See MPEP 2106.05(f).
The Federal Circuit has recognized that "an invocation of already-available computers that are not themselves plausibly asserted to be an advance, for use in carrying out improved mathematical calculations, amounts to a recitation of what is 'well-understood, routine, [and] conventional.'" SAP Am., Inc. v. InvestPic, LLC, 890 F.3d 1016, 1023 (Fed. Cir. 2018) (alteration in original) (citing Mayo v. Prometheus, 566 U.S. 66, 73 (2012)). Apart from the instructions to implement the abstract idea, they only serve to perform well-understood functions (e.g., receiving, translating, and displaying data—see Specification above as well as Alice Corp.; Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307 (Fed. Cir. 2016); and Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334 (Fed. Cir. 2015) covering the well-known nature of these computer functions).
Furthermore, as discussed above, the additional elements of the “distributed ledger system”, “virtual reality and/or augmented reality environment”, and “virtual representation of the person” are recited at high levels of generality and were determined to generally link the abstract idea to a particular technological environment or field of use. These additional elements have been re-evaluated under Step 2B and have also been found insufficient to provide significantly more. (See MPEP 2106.05(h) indicating that generally linking an abstract idea to a particular technological environment does not amount to significantly more.) Furthermore, the Background section of Applicant’s Specification (e.g., see [0024], [0026]) indicates that a virtual reality environment is well-understood, routine, and conventional in the field. (See MPEP 2106.05(d) indicating that well-understood, routine, and conventional activities cannot provide significantly more.)
Dependent Claims
The limitations of the dependent claims, except for those addressed below, merely set forth further refinements of the abstract idea without changing the analysis already presented. Claims 2, 5 (12, 15) merely provide a token for accessing the data and aggregate data from a plurality of sources, and claim 9 (19) describes the type of user data, each of which covers a method of organizing human activity (i.e., managing personal behavior including following rules or instructions). Claims 3, 4 (13, 14) include the additional element of the virtual representation of the person and/or another person, which is analyzed in the same manner as the virtual representations of the independent claims and which does not provide a practical application or amount to significantly more for the same reasons detailed above.
Claims 6-8 (16-18) further refine the abstract idea described in the independent claims and further recite the additional elements of (1) training of a machine learning and artificial intelligence (AI) engine and (2) use of the machine learning and AI. Regarding (1), when given the broadest reasonable interpretation in light of the nonexistent description of AI training in the disclosure, training of an AI model with the noted data amounts to a mathematical concept that creates data associations. As such, this training of the AI engine is interpreted to be subsumed within the identified abstract idea, and the use of the trained model provides nothing more than mere instructions to implement the abstract idea, supra. See July 2024 Subject Matter Eligibility Examples, Example 47, Claim 2, discussion of item (c) at Pgs. 7-9. Regarding (2), the use of the AI engine provides nothing more than mere instructions to implement an abstract idea on a generic computer (“apply it”) and does not provide a practical application, nor does it amount to significantly more. See MPEP 2106.05(f) and 2106.05(h); July 2024 Subject Matter Eligibility Examples, Example 47, Claim 2, discussion of items (d) and (e) at Pgs. 8-9.
Claim 10 (20) further refines the abstract idea described in the independent claim and further recites the use of voice recognition technology, which amounts to generally linking the abstract idea to a particular technological environment and which does not provide a practical application or amount to significantly more for the same reasons detailed above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Campbell et al. (USPPN: 2022/0255763; hereinafter Campbell) in view of Hastings et al. (USPPN: 2015/0254793; hereinafter Hastings).
As to claim 1, Campbell teaches A computer-implemented method for securely collecting, generating, and provisioning information (e.g., see Title, Abstract), the method comprising:
receiving, by at least one computing device from each of a respective data collection sources, information associated with a person (e.g., see Figs. 1, 5, [0022], [0043], [0071], [0119] teaching a system for receiving user data from a plurality of different sources);
securing, by the at least one computing device, at least some of the received information at least as a function of at least one token to provide secured information (e.g., see [0023], [0025], [0032]-[0033], [0041], [0058] teaching encrypting the data via keys);
storing, on a distributed ledger system, the secured information (e.g., see [0026], [0029], [0041], [0046]-[0050] storing the secured information on a distributed ledger);
providing, by the at least one computing device via a virtual reality and/or augmented reality (“VR/AR”) environment, [a virtual representation of the person, wherein the virtual representation] has access to the secured information as a function of the distributed ledger system (e.g., see Fig. 8, [0023], [0026],[0080], [0153] wherein the distributed ledger system provides user access to their data and permits a user to manage what other users can access some or all of the data, wherein the system can be provided as a virtual or augmented reality);
receiving, by the at least one computing device from the at least one other computing device, a request to access the secured information (e.g., see Figs. 3, 6, 7, [0056], [0058] wherein a third party can request access to the secured information);
accessing, by the at least one computing device, at least one permission provided by the person (e.g., see [0047], [0056] wherein permissions set by the user are determined for the third party requesting access);
determining, by the at least one computing device as a function of the permission, that the at least one other computing device does not have authorization to access the at least some of the secured information (e.g., see [0023], [0056], [0058] wherein it is determined whether the third party has been granted access and/or access has not been removed for the third party, wherein access can be granted/denied to some or all of the data);
providing to the person, by the at least one computing device, the request to access the at least some of the secured information (e.g., see [0056], [0058] wherein a user can select those they would like to grant permission to access their data); and
automatically providing, by the at least one computing device, in response to receiving the authorization, the at least one other computing device with access to at least some of the secured information (e.g., see [0056]-[0059] wherein the permissions for data access can be continuously updated so that a third party is granted/denied access automatically).
While Campbell teaches determining whether or not a third party has permission to access the secured data and automatically providing access to the secured information when a third party has permission/authorization to do so, wherein the list of authorized parties is continuously updated so that a user can select a third party to either grant or deny access to their secured data automatically, and further teaches that the system may be implemented in an augmented, mixed, or virtual reality environment (e.g., see [0080]), Campbell fails to teach providing a virtual representation of the person; providing, by the at least one computing device, an interactive communication session between the virtual representation of the person and at least one other computing device; receiving, during the interactive communication session, a request to access; providing to the person, via the virtual representation of the person, the request to access; receiving, by the at least one computing device via the virtual representation of the person, authorization from the person for the at least one other computing device to access the at least some of the secured information; and automatically providing, via the virtual representation of the person, during the interactive communication session.
However, in the same field of endeavor of data access, Hastings teaches providing a virtual representation of the person (e.g., see Figs. 10, 13, [0132], [0144] teaching a virtual reality environment, wherein each user can be represented as a virtual avatar); providing, by the at least one computing device, an interactive communication session between the virtual representation of the person and at least one other computing device (e.g., see Fig. 12B, [0130] wherein the users can interact with one another in the virtual reality environment); receiving, during the interactive communication session, a request to access (e.g., see Fig. 12B, [0130] wherein, during a communication session, a user can make a request to another user to access their data); providing to the person, via the virtual representation of the person, the request to access (e.g., see Fig. 12B, [0129], [0130] wherein a user, through use of their avatar, can make the request to another user’s avatar); receiving, by the at least one computing device via the virtual representation of the person, authorization from the person for the at least one other computing device to access the at least some of the secured information (e.g., see Fig. 12B, [0129], [0130] wherein a user, through their avatar, provides access to the data); and providing, via the virtual representation of the person, during the interactive communication session (e.g., see Fig. 12B, [0130] wherein the data is granted during the communication session through the user’s avatar).
Accordingly, it would have been obvious before the effective filing date of the claimed invention to modify Campbell in view of Hastings with a reasonable expectation of success. One would have been motivated to make the modification to provide for a more engaging user environment thus improving user experience.
As to claim 2, the rejection of claim 1 is incorporated. Campbell further teaches wherein automatically providing the at least one other computing device with access to the at least some of the secured information further comprises: accessing, by the at least one computing device, the at least one token; and providing, to the at least one other computing device, the at least one token (e.g., see [0023], [0053], [0056], [0075] wherein the system accesses and provides keys to authorized parties to at least some of the data).
As to claim 3, the rejection of claim 1 is incorporated. While Campbell teaches the system can be implemented in a VR/AR environment (e.g., see [0080]), Campbell fails to teach providing the virtual representation of the person, wherein the request to access the at least some of the secured information is received by the virtual representation of the person.
However, in the same field of endeavor of data access, Hastings teaches providing, by the at least one computing device via a VR/AR environment, the virtual representation of the person, wherein the request to access the at least some of the secured information is received by the virtual representation of the person (e.g., see Fig. 12B, [0130] wherein the request to access/provide data is through the displayed avatars).
Accordingly, it would have been obvious before the effective filing date of the claimed invention to modify Campbell in view of Hastings with a reasonable expectation of success. One would have been motivated to make the modification in order to allow for auditory and video feedback, thereby improving user experience and user engagement.
As to claim 4, the rejection of claim 3 is incorporated. Campbell fails to teach wherein the request to access the at least some of the secured information is received by the virtual representation of the person from a virtual representation of another person.
However, in the same field of endeavor of data access, Hastings teaches wherein the request to access the at least some of the secured information is received by the virtual representation of the person from a virtual representation of another person (e.g., see Fig. 1C, [0031]-[0034] wherein a virtual avatar is granted or denied access to user data).
Accordingly, it would have been obvious before the effective filing date of the claimed invention to modify Campbell in view of Hastings with a reasonable expectation of success. One would have been motivated to make the modification to provide for a more engaging user environment thus improving user experience.
As to claim 5, the rejection of claim 1 is incorporated. Campbell further teaches wherein the at least some of the secured information was previously received from a plurality of respective sources (e.g., see Figs. 1, 5, [0022], [0043], [0071], [0119] teaching a system for receiving user data from a plurality of different sources); and further wherein automatically providing the at least one other computing device with access to the at least some of the secured information further comprises: accessing, by the at least one computing device, the information previously received from the plurality of respective sources (e.g., see Figs. 1, 5, [0022], [0043], [0071], [0119] teaching a system for receiving user data from a plurality of different sources); aggregating, by the at least one computing device, the information collected from each of a plurality of respective sources (e.g., see Figs. 1, 5, [0022], [0043], [0071], [0119] wherein the user data retrieved from other third party sources are used to enrich (i.e., aggregate) the user data); and securing, by the at least one computing device, the aggregated information as a function of the at least one token (e.g., see [0023], [0025], [0032]-[0033], [0041], [0058] teaching encrypting the data via keys).
As to claims 11-15, the claims are directed towards the system implementing the method of claims 1-5 and are similarly rejected.
Claims 6-9 and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Campbell and Hastings, as applied above, and in further view of Brown et al. (USPPN: 2014/0337048; hereinafter Brown).
As to claim 6, the rejection of claim 1 is incorporated. While Campbell teaches monitoring user data and third party interactions with machine learning and artificial intelligence (e.g., see [0137]-[0139]), Campbell-Hastings fail to explicitly teach using interactions between the person and virtual representation of the person via the VR/AR environment over time, by the at least one computing device, for machine learning and artificial intelligence.
However, in the same field of endeavor of virtual assistants, Brown teaches using interactions between the person and virtual representation of the person via the VR/AR environment over time, by the at least one computing device, for machine learning and artificial intelligence (e.g., see [0089]-[0092] teaching the use of machine learning and AI in a user’s interaction with the virtual assistant).
Accordingly, it would have been obvious before the effective filing date of the invention to modify Campbell-Hastings in view of Brown with a reasonable expectation of success. One would have been motivated to make the modification to improve the way users interact with devices and increase user engagement (e.g., see [0007] of Brown).
As to claim 7, the rejection of claim 6 is incorporated. Campbell further teaches wherein the secured information includes information associated with the person's health (e.g., see [0022] wherein the user data includes health information).
Campbell-Hastings fail to teach recognizing, by the at least one computing device via the machine learning and artificial intelligence, at least one health condition associated with the user.
However, in the same field of endeavor of virtual assistants, Brown teaches recognizing, by the at least one computing device via the machine learning and artificial intelligence, at least one health condition associated with the user (e.g., see [0078], [0079], [0089]-[0092], [0096] wherein the virtual assistant can further predict a diagnosis of the patient using machine learning). Accordingly, it would have been obvious to modify Campbell-Hastings in view of Brown with a reasonable expectation of success. One would have been motivated to make the modification to improve the way users interact with devices and increase user engagement (e.g., see [0007] of Brown).
As to claim 8, the rejection of claim 6 is incorporated. Campbell further teaches wherein the secured information includes information associated with the person's health, and further comprising: monitoring, by the at least one computing device via the machine learning and artificial intelligence, aspects of the person's health (e.g., see [0022], [0137]-[0139] wherein user data is monitored by machine learning and AI, wherein the user data includes information regarding the person’s health).
As to claim 9, the rejection of claim 8 is incorporated. Campbell further teaches wherein the aspects of the person's health include at least one of blood pressure, skin acidity, body temperature, heartrate, pupil movement, pupil shape, eyelid movement, iris information, sleep information, and stress (e.g., see [0022] wherein user health data includes heartbeat, blood analysis through skin, etc.).
As to claims 16-19, the claims are directed towards the system implementing the method of claims 6-9 and are similarly rejected.
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Campbell and Hastings, as applied above, and further in view of Badr et al. (US Patent Application Publication No. 2022/0374612; hereinafter Badr).
As to claim 10, the rejection of claim 1 is incorporated. Campbell fails to teach wherein the authorization received by the virtual representation of the person via the VR/AR environment is in spoken form from the person, and further comprising: converting, by the at least one computing device using voice recognition technology, the spoken form of authorization to a digital form.
However, in the same field of endeavor of data access, Badr teaches wherein the authorization received by the virtual representation of the person via the VR/AR environment is in spoken form from the person, and further comprising: converting, by the at least one computing device using voice recognition technology, the spoken form of authorization to a digital form (e.g., see Fig. 1, [0006], [0051]-[0056] teaching communicating with the virtual assistant via voice and using natural language processing).
Accordingly, it would have been obvious before the effective filing date of the claimed invention to modify Campbell-Hastings in view of Badr with a reasonable expectation of success. One would have been motivated to make the modification in order to cause virtual assistants to proactively engage with their users based on another’s request, thereby improving the automation of interactions between people (e.g., see [0005] of Badr).
As to claim 20, the claim is directed towards the system implementing the method of claim 10 and is similarly rejected.
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. “The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain.” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STELLA HIGGS whose telephone number is (571)270-5891. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Choi can be reached at (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STELLA HIGGS/Primary Examiner, Art Unit 3681