DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are presented for examination.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mason et al. U.S. Patent Application Publication Number 2021/0142898 A1 (hereinafter Mason), and further in view of Du U.S. Patent Application Publication Number 2021/0248789 A1 (hereinafter Du).
As per claims 1, 11, and 20, Mason discloses a computer-implemented method, comprising:
receiving a request, via a communication network, to conduct a communication session between one or more user devices (see a user requesting a virtual conference between the user and a health provider on page 4 section [0041] and see the telemedicine session between an individual and treatment devices on page 1 section [0003]), wherein a first user device of the one or more user devices is a healthcare provider (see a user requesting a virtual conference with identified healthcare providers on page 4 section [0041] and see the telemedicine session between devices of a healthcare professional and an individual on page 1 section [0003]);
receiving user profile data, wherein the user profile data pertains to at least one of the one or more user devices of the communication session (see health care provider device receiving user medical records of the individual, or user profile data as claimed, on page 4 section [0042]);
detecting communications between the one or more user devices over a duration of the communication session (to be taught by Du);
generating, in real time, an object using a machine-learning model, wherein the object is generated based on at least the user profile data and the detected communications (to be taught by Du); and
presenting the object to at least one of the one or more user devices, wherein the presentation occurs over the duration of the communication session (to be taught by Du).
Mason does not disclose expressly: detecting communications between the one or more user devices over a duration of the communication session;
generating, in real time, an object using a machine-learning model, wherein the object is generated based on at least the user profile data and the detected communications; and
presenting the object to at least one of the one or more user devices, wherein the presentation occurs over the duration of the communication session.
Du teaches: detecting communications between the one or more user devices over a duration of the communication session (see using artificial intelligence to analyze live user interactions on page 2 section [0009] and see analyzing user responses to presented content on page 9 section [0078]);
generating, in real time (see generating 2D/3D models in real time using artificial intelligence on page 5 section [0050]), an object using a machine-learning model (see the use of artificial intelligence or machine learning to learn analytics results from user interaction on pages 3-4 section [0024]), wherein the object is generated based on at least the user profile data and the detected communications (see providing an interactive object and additional object content based on live user interaction on page 5 section [0049]); and
presenting the object to at least one of the one or more user devices, wherein the presentation occurs over the duration of the communication session (see presenting real-time content and overlay information to users in the form of 2D/3D models, or objects as claimed, on page 5 section [0050]).
Mason and Du are analogous art because they are from the same field of endeavor, namely interactive conferencing systems with machine learning. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to generate an object in real time using machine learning during a communication session. The motivation for doing so would have been to enhance communication using realistic live user interactive models (see page 5 section [0049] in Du). Therefore, it would have been obvious to combine Mason and Du for the benefit of generating and presenting real-time objects using machine learning to obtain the invention as specified in claims 1, 11, and 20.
As per claims 2, 12, Mason and Du disclose the computer-implemented method of claim 1, wherein the object is encompassed in a primary interactive object that is currently presented to at least one of the one or more user devices, whereby the object does not visually display additional content without interaction via the communication network (see the interactive object that responds to user movement, such as rotation, to present the interactive object in a virtual reality environment on page 5 section [0049] and see live user interaction causing presentation of additional content, such as with a book or a teacher hologram, on page 8 section [0073] in Du). The motivation to combine is the same as above.
As per claims 3, 13, Mason and Du disclose the computer-implemented method of claim 1, wherein the object encompasses the entirety of a visual virtual environment on a user device of the one or more user devices (see presenting virtual objects in a virtual reality environment to provide live user interactions on page 5 section [0049] in Du), and wherein the object contains one or more interactive elements (see live user interaction causing presentation of additional content, such as with a book or a teacher hologram, on page 8 section [0073] in Du). The motivation to combine is the same as above.
As per claims 4, 14, Mason and Du disclose the computer-implemented method of claim 1, wherein the object includes a visual transcription of an audio channel of the communication session (see analysis of textual and audio signals during interaction sessions on page 4 section [0025] in Du). The motivation to combine is the same as above.
As per claims 5, 15, Mason and Du disclose the computer-implemented method of claim 1, wherein the machine-learning model generates the object based on prior communication sessions between at least one of the one or more user devices (see machine learning models learning analytics result information from the interaction session on page 4 section [0024] and see artificial intelligence learning data analytics information from the user and from training data collected over time on page 7 section [0061] in Du). The motivation to combine is the same as above.
As per claims 6, 16, Mason and Du disclose the computer-implemented method of claim 1, wherein the object incorporates data from the communication session and/or prior communication sessions to generate updated clinical guidelines (see using a trained machine learning model to update the appropriate treatment plan from previously trained data on page 14 section [0138] and see the healthcare provider may interact with and modify the treatment plan on page 4 section [0044] in Mason). The motivation to combine is the same as above.
As per claims 7, 17, Mason and Du disclose the computer-implemented method of claim 1, wherein the communication session includes an audio channel and a video channel (see the live interactive communication session including audio and video on page 4 section [0024] and see providing a real-time overlay for video and voice communication on page 5 section [0050] in Du). The motivation to combine is the same as above.
As per claims 8, 18, Mason and Du disclose the computer-implemented method of claim 1, wherein a video channel associated with the communication session is configured to present a collaborative window in place of a representation of a user of a user device (see the user collaboration session on page 4 section [0028] in Du and see the overview display, or collaborative window as claimed, displaying information to assist the patient on page 12 section [0113] and see collaborative browsing to view the patient interface on page 12 section [0119] in Mason). The motivation to combine is the same as above.
As per claims 9, 19, Mason and Du disclose the computer-implemented method of claim 1, wherein a recording of the communication session contains consolidated video content (see viewing real-time AR interactive content with overlay information, or consolidated video content as claimed, on page 5 section [0050] in Du) from the one or more user devices (see collaborative browsing to share viewing of the patient interface across devices, or consolidated video content as claimed, on page 12 section [0119] in Mason). The motivation to combine is the same as above.
As per claim 10, Mason and Du disclose the computer-implemented method of claim 1, wherein the user profile data is generated by at least one of a machine-learning algorithm, prior communication sessions, and data gathered prior to the communication session (see machine learning using user training data collected over time on page 7 section [0061] in Du). The motivation to combine is the same as above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Mason et al. U.S. Patent Application Publication Number 2025/0182888 A1. Telemedicine session for patient and healthcare providers (see Abstract).
Lim U.S. Patent Application Publication Number 2025/0378948 A1. AI-based personalization of medical information to provide personalized healthcare content for users.
Cahalin et al. U.S. Patent Application Publication Number 2022/0180293 A1. Real time connection of service seekers with online service providers to establish a live media session (see Abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN S CHOU whose telephone number is (571)272-5779. The examiner can normally be reached Monday-Friday 9:00-5:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris L Parry can be reached at (571)272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALAN S CHOU/Primary Examiner, Art Unit 2451