Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/02/2025 has been entered.
Detailed Action
This is an Office Action for application 17/790,946 in response to arguments and amendments filed on 12/02/2025. Claims 1, 5, 6, 9-13, 15, 17, and 22-23 are currently amended. Claims 2, 3, and 14 are cancelled. Claims 1, 4-13, and 15-23 are pending and examined below.
Response to Arguments
Applicant’s arguments, see pgs. 8-9, filed 12/02/2025, with respect to the rejection(s) of claim(s) 1 under 35 USC § 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Katzer (US Pat. 9,191,620).
Claim Objections
Claim 5 is objected to because of the following informalities: amended claim 5 now contains two periods. Appropriate correction is required.
Claim 23 is objected to because of the following informalities: amended claim 23 appears to contain a misspelling of the word “biomechanical”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-13, and 15-22 are rejected under 35 U.S.C. 103 as being unpatentable over Fredier (US Pub. 2016/0066864) in view of Hebrard (US Pub. 2018/0218793) and Katzer (US Pat. 9,191,620).
Regarding claim 1, Fredier teaches
A computing device for real-time electronic communication enhanced with visual indicators of metadata, said device comprising: a display interface; (Figs. 24-5; Par. [0011-2, 152] a first electronic device monitoring the real-time health of a user can notify a second user's phone with the user's condition (e.g. on the display))
at least one communication interface; (Figs. 24-5; Par. [0011-2] a first electronic device monitoring the health of a user can notify (i.e. communicate) a second user's phone with the user's condition)
at least one memory storing processor-executable instructions; and (Par. [0053] a processor and recordable medium are used to execute the method and system)
at least one processor in communication with said at least one memory, said at least one processor configured to execute said instructions to: receive, by way of said at least one communication interface, communication data encoding a communication from an individual with whom a user of said computing device is engaging in a real-time electronic communication session and metadata from a plurality of disparate sources, wherein said real-time electronic communication session comprises real-time communication between said individual and said user; (Fig. 25; Par. [0152, 161-2, 174] a fitness monitor can be used to gather real-time health information for use by the inference engine and can be based on data like locality (i.e. location metadata) among many others, and it can also alert a user’s device (i.e. a communication data) about another user’s behavior)
generate a visual representation of said metadata, said visual representation comprising a plurality of per-state visual indicators respectively corresponding to different ones of a location state, a social connectedness state, a biomechanics state, a mood state, and a health and wellness state of said individual; (Figs. 12-15, 22-3; Par. [0055, 59, 171-2] an exemplary user state monitoring system is used to notify the medical personnel relevant to the issue for the relevant person if tracking is set to active, and the displayed (i.e. visually represented) notification can include information related to location and health)
receive, by way of said at least one communication interface, updated metadata, from at least one of said disparate sources; and (Figs. 26-8; Par. [0182-4] community members and their respective activities that warrant notifications (e.g. received due to updated metadata) are displayed)
Fredier does not explicitly teach
present, by way of said display interface, a user interface for conducting the real-time electronic communication session with said individual, the user interface concurrently displaying communication content of the session and said visual representation adjacent to the communication content;
for each per-state visual indicator, display the indicator as enabled only when sharing of the corresponding state with said user has been selected to active by said individual through a sharing control at the individual's device, and display the indicator as visually disabled when sharing of the corresponding state is inactive;
and update said user interface while the session is ongoing, to reflect said updated metadata without leaving said user interface.
However, from the same field Hebrard further teaches
present, by way of said display interface, a user interface for conducting the real-time electronic communication session with said individual, the user interface concurrently displaying communication content of the session and said visual representation adjacent to the communication content; (Par. [0055] a 3D human anatomical model (i.e. visual representation) is displayed along with annotations of the patient’s symptoms (i.e. communication content displayed adjacent to the visual representation))
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the interactive 3D model of Hebrard into the visual representation of the user in Fredier. The motivation for this combination would have been to improve diagnostic accuracy and treatment outcomes while saving on healthcare costs as explained in Hebrard (Par. [0060]).
The combination of Fredier and Hebrard does not explicitly teach
for each per-state visual indicator, display the indicator as enabled only when sharing of the corresponding state with said user has been selected to active by said individual through a sharing control at the individual's device, and display the indicator as visually disabled when sharing of the corresponding state is inactive;
and update said user interface while the session is ongoing, to reflect said updated metadata without leaving said user interface.
However, from the same field, Katzer teaches
for each per-state visual indicator, display the indicator as enabled only when sharing of the corresponding state with said user has been selected to active by said individual through a sharing control at the individual's device, and display the indicator as visually disabled when sharing of the corresponding state is inactive; (Col. 6 [Lines 13-32] each modification of the user’s avatar can be turned on/off (i.e. active or inactive) and can be selectively shared with different types of users depending on the user’s privacy settings)
and update said user interface while the session is ongoing, to reflect said updated metadata without leaving said user interface. (Col. 1 [Lines 31-42] the graphical representation is in a living environment (i.e. while the session is ongoing and without leaving the interface))
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the avatar modifications of Katzer into the visual representation of the user in Fredier. The motivation for this combination would have been to promote an increased sense of connectedness during a call as explained in Katzer (Col. 2 [Lines 59-61]).
Regarding claim 4, Fredier, Hebrard and Katzer teach claim 1 as shown above, and Fredier further teaches
The computing device of claim 1, wherein said real-time electronic communication comprises at least one of audio communication, video communication, AR communication, or VR communication. (Par. [0124] comments on activities include videos)
Regarding claim 5, Fredier, Hebrard and Katzer teach claim 1 as shown above, and Fredier further teaches
The computing device of claim 1, wherein said plurality of states comprises at least a first state and a second state, wherein said visual representation selectively presents a first visual indicator of said first state when sharing of said first state with said user is selected to active, and wherein said visual representation selectively presents a second visual indicator of said second state when sharing of said second state with said user is selected to active. (Fig. 25; Par. [0059, 161, 174] a fitness monitor can be used to gather health information for use by the inference engine when tracking is set to active and can be based on data like locality)
Katzer further teaches
and wherein visual indicators corresponding to states for which sharing is inactive are rendered as visually disabled. (Col. 6 [Lines 13-32] each modification of the user’s avatar can be turned on/off (i.e. active or inactive) and can be selectively shared with different types of users depending on the user’s privacy settings)
Regarding claim 6, Fredier, Hebrard and Katzer teach claim 5 as shown above, and Fredier further teaches
The computing device of claim 1, wherein said received metadata omits, and said visual representation omits and visually disables, the indicator for said states when sharing of said states with said user is selected to inactive. (Fig. 25; Par. [0059, 161, 174] a fitness monitor can be used to gather health information for use by the inference engine unless the user deactivates (i.e. inactivates) self-tracking)
Regarding claim 7, Fredier, Hebrard and Katzer teach claim 1 as shown above, and Fredier further teaches
The computing device of claim 1, wherein said visual representation includes a visual representation of the individual. (Fig. 26 #202; Par. [0182] GUI contains a space for a community member profile picture)
Regarding claim 8, Fredier, Hebrard and Katzer teach claim 1 as shown above, and Fredier further teaches
The computing device of claim 1, wherein said visual representation includes a static visual representation of the individual. (Fig. 26 #202; Par. [0182] GUI contains a space for a community member profile picture (e.g. static))
Regarding claim 9, Fredier, Hebrard and Katzer teach claim 1 as shown above, and Hebrard further teaches
The computing device of claim 1, wherein said visual representation includes a dynamic visual representation of the individual presented within the same user interface for the real-time electronic communication session. (Par. [0130-2] an interactive 3D anatomical model of a patient (i.e. dynamic representation of the individual) is depicted)
Regarding claim 10, Fredier, Hebrard and Katzer teach claim 9 as shown above, and Hebrard further teaches
The computing device of claim 9, wherein said dynamic visual representation of the individual comprises an animated avatar of the individual rendered as part of the visual representation and updated during the session. (Par. [0130-2] an interactive 3D anatomical model of a patient is depicted and can illustrate bleeding (i.e. animation), blue limbs for coldness, etc.)
Regarding claim 11, Fredier, Hebrard and Katzer teach claim 10 as shown above, and Hebrard further teaches
The computing device of claim 10, wherein said animated avatar is animated to reflect said updated metadata only for states for which sharing with said user is active, and omits animation for states for which sharing is inactive. (Par. [0130-2] after being confirmed (i.e. updated) by the provider's in-person findings, the patient's symptoms are illustrated on the avatar)
Regarding claim 12, Fredier, Hebrard and Katzer teach claim 9 as shown above, and Fredier further teaches
The computing device of claim 9, wherein said dynamic visual representation of the individual comprises a video of the individual concurrently displayed with said communication content and said per-state visual indicators. (Par. [0123-4] the user's location and activities are used as metadata associated with user's video (i.e. a video of the individual))
Regarding claim 13, Fredier, Hebrard and Katzer teach claim 12 as shown above, and Katzer further teaches
The computing device of claim 12, wherein said video comprises live video data and the per-state visual indicators are updated during the live video communication without leaving the user interface. (Col. 1 [Lines 31-42] the graphical representation is in a living environment (i.e. while the session is ongoing and without leaving the interface))
Regarding claim 15, Fredier, Hebrard and Katzer teach claim 8 as shown above, and Katzer further teaches
The computing device of claim 8, wherein said visual representation of the individual comprises audio data of the individual accessible through a per-state audio indicator within the visual representation, the indicator being enabled only when the individual's sharing setting for audio is active. (Col. 6 [Lines 13-32] each modification of the user’s avatar can be turned on/off (i.e. active or inactive) and can be selectively shared with different types of users depending on the user’s privacy settings)
Regarding claim 16, Fredier, Hebrard and Katzer teach claim 1 as shown above, and Fredier further teaches
The computing device of claim 1, wherein the at least one processor is configured to execute said instructions to: change, upon an interaction by the user, a detail level of the visual representation of the metadata. (Figs. 26, 29; Par. [0184] the user presses (i.e. interacts) the right arrow on a particular community member (#202) and a more detailed (i.e. detail level) community member information is displayed (i.e. visually represented; #220))
Regarding claim 17, the language is slightly different, but is rejected under the same rationale as claim 1.
Regarding claim 18, Fredier, Hebrard and Katzer teach claim 17 as shown above, and Fredier further teaches
The computer-implemented method of claim 17, further comprising sending communication data reflective of the real-time electronic communication to the individual. (Par. [0015-6] the message containing information about activity patterns can be delivered to the user)
Regarding claim 19, the language is slightly different, but is rejected under the same rationale as claim 4.
Regarding claim 20, Fredier, Hebrard and Katzer teach claim 17 as shown above, and Fredier further teaches
The computer-implemented method of any one of claims 17 to 19, further comprising receiving communication data reflective of the electronic communication from the individual. (Par. [0059] a user can select whether to activate or deactivate self and community tracking information)
Regarding claim 21, the language is slightly different, but is rejected under the same rationale as claim 16.
Regarding claim 22, the language is slightly different, but is rejected under the same rationale as claim 1.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Fredier (US Pub. 2016/0066864) in view of Hebrard (US Pub. 2018/0218793) and Katzer (US Pat. 9,191,620), and further in view of Bender et al. (US Pub. 2020/0090132).
Regarding claim 23, Fredier, Hebrard and Katzer teach claim 1 as shown above, but do not explicitly teach
The computing device of claim 1, wherein at least one of the plurality of disparate sources comprises a smart garment worn by the individual, the smart garment comprising a knitted textile having a plurality of conductive fibres interlaced with non-conductive fibres to form integrated signal paths to sensors configured to sense at least one of ECG, EMG, or body temperature, and wherein the per-state visual indicators include indicators of electrophysiological signals and bioechanical feedback obtained from the smart garment.
However, from the same field Bender teaches
The computing device of claim 1, wherein at least one of the plurality of disparate sources comprises a smart garment worn by the individual, the smart garment comprising a knitted textile having a plurality of conductive fibres interlaced with non-conductive fibres to form integrated signal paths to sensors configured to sense at least one of ECG, EMG, or body temperature, and wherein the per-state visual indicators include indicators of electrophysiological signals and bioechanical feedback obtained from the smart garment. (Par. [0033] IoT devices include wearable devices, including smart clothes (i.e. a smart garment with a plurality of conductive fibers))
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the smart garment of Bender into the data sources of Fredier. The motivation for this combination would have been to improve the efficiency in scheduling events for the user of the client devices as explained in Bender (Par. [0011]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to J MITCHELL CURRAN whose telephone number is (469)295-9081. The examiner can normally be reached M-F 8:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi can be reached on (571) 272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J MITCHELL CURRAN/Examiner, Art Unit 2161
/SHERIEF BADAWI/Supervisory Patent Examiner, Art Unit 2169