Detailed Action
This communication is in response to the Application filed on 2/12/2024.
Claims 1-20 are pending and have been examined.
Independent claims 1 and 11 are a device claim and a method claim, respectively.
Apparent priority: 9/24/2021.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 2/12/2024 has been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Independent claim 1 recites,
“1. A computing device comprising: a computer-readable medium storing instructions for digital twin behavioral modeling; and a processor configured to execute the instructions to cause a system including at least a multimodal dialog manager and a digital twin platform to:
receive one or more multimodal queries or conversations; (This relates to a human using auditory processing to receive conversations.)
parse the multimodal queries or conversations for content; (This relates to a human using natural language processing to parse conversations for content in the human mind.)
recognize and sense one or more multimodal content from the parsed content; (This relates to a human using natural language understanding to recognize and sense content in the human mind.)
train the multimodal dialog manager and a virtual human, wherein to train, the processor is further configured to execute instructions to: (This relates to a human using pen and paper to execute instructions.)
process social simulations which model behavior of humans and virtual humans, wherein each pair of a human and a virtual human form digital twins; (This relates to a human using natural language understanding and awareness to process social situations.)
and transfer knowledge through the social simulations to the virtual humans, wherein social and functional behavior of humans is transferred to the virtual humans via the digital twin platform to generate a learned digital twin behavior model; (This relates to a human using natural learning to transfer knowledge through speech or pen and paper.)
and generate responses to the multimodal queries or conversations based on the learned digital twin behavior model.” (This relates to a human using pen and paper to generate a response.)
The dependent claims do not include additional limitations that integrate the abstract idea into a practical application or cause the claims as a whole to amount to significantly more than the underlying abstract idea.
This judicial exception is not integrated into a practical application. In particular, claim 1 recites the additional element of a “processor.” For example, paragraph [0035] of the specification as filed describes that “[t]he computing device 100 includes at least one processor 102.” Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computer amounts to no more than a generic computer. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations in the claims noted above are directed towards insignificant extra-solution activity. The claims are not patent eligible.
Regarding independent claim 11, claim 11 is a method claim with limitations similar to those of claim 1 and is rejected under the same rationale. No additional elements are recited.
Dependent claim 2 recites,
“2. The computing device of claim 1, wherein the social and functional behavior include verbal and non-verbal behavioral patterns.” (This relates to a human using contextual awareness and learning to include verbal and non-verbal patterns.) No additional elements are recited.
Dependent claim 3 recites,
“3. The computing device of claim 1, wherein quantum learning and quantum transfer is used to transfer the knowledge among the digital twins.” (This relates to a human using speech or pen and paper to transfer knowledge.) No additional elements are recited.
Dependent claim 4 recites,
“4. The computing device of claim 3, wherein quantum teleportation and quantum entanglement is used to transfer conversational control states among the digital twins.” (This relates to a human using speech or pen and paper to transfer states.) No additional elements are recited.
Dependent claim 5 recites,
“5. The computing device of claim 4, wherein the quantum teleportation transfers a conversational control state to one of the human or the virtual human without communicating a control change to the human or the virtual human currently having the conversational control state.” (This relates to a human using speech or pen and paper to transfer a control state.) No additional elements are recited.
Dependent claim 6 recites,
“6. The computing device of claim 5, wherein the quantum entanglement describes quantum states of the human and the virtual human with reference to each other.” (This relates to a human having awareness in the human mind.) No additional elements are recited.
Dependent claim 7 recites,
“7. The computing device of claim 6, wherein the human and the virtual human exist in a superposition quantum state.” (This relates to a human existing.) No additional elements are recited.
Dependent claim 8 recites,
“8. The computing device of claim 7, wherein the human and the virtual human can switch between quantum states to capture digital twin behavioral patterns represented in the learned digital twin behavior model.” (This relates to a human applying logic and reasoning to capture behavior patterns.) No additional elements are recited.
Dependent claim 9 recites,
“9. The computing device of claim 4, wherein the digital twin platform is a multi-layer quantum framework for performing the quantum teleportation and the quantum entanglement as between the digital twins.” (This relates to a human existing.) No additional elements are recited.
Dependent claim 10 recites,
“10. The computing device of claim 3, wherein quantum information difference is minimized between data points for the human and data points for the virtual human in quantum space so that the virtual human behaves substantially the same as the human.” (This relates to a human behaving the same as another.) No additional elements are recited.
As to claim 12, claim 12 is a parallel method claim with limitations similar to that of claim 2 and is rejected under the same rationale.
As to claim 13, claim 13 is a parallel method claim with limitations similar to that of claim 3 and is rejected under the same rationale.
As to claim 14, claim 14 is a parallel method claim with limitations similar to that of claim 4 and is rejected under the same rationale.
As to claim 15, claim 15 is a parallel method claim with limitations similar to that of claim 5 and is rejected under the same rationale.
As to claim 16, claim 16 is a parallel method claim with limitations similar to that of claim 6 and is rejected under the same rationale.
As to claim 17, claim 17 is a parallel method claim with limitations similar to that of claim 7 and is rejected under the same rationale.
As to claim 18, claim 18 is a parallel method claim with limitations similar to that of claim 8 and is rejected under the same rationale.
As to claim 19, claim 19 is a parallel method claim with limitations similar to that of claim 9 and is rejected under the same rationale.
As to claim 20, claim 20 is a parallel method claim with limitations similar to that of claim 10 and is rejected under the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Leeds (U.S. Patent No. US 11431660 B1), in view of Lok (U.S. Patent Application Publication No. US 2012/0139828 A1), and further in view of Hayashida (U.S. Patent Application Publication No. US 2018/0120928 A1).
Regarding independent Claim 1, Leeds teaches
process social simulations which model behavior of humans and virtual humans, wherein each pair of a human and a virtual human form digital twins; and transfer knowledge through the social simulations to the virtual humans, (see Leeds, 55:60-56:67: “The present invention can be used to coordinate goal-oriented processes managed by facilitators and implemented by collaborative subminds. Quantum level challenges of NLU implementation, submind coordination and human language comprehensibility may be addressed by the present invention, as such computing resources become more readily available, and intelligent biological systems mechanisms are further elucidated. Quantum computing and effects utilized by subminds may include probabilistic calculations, particle entanglement and calculation with matrices. For example, all possible interpretations of a conversation segment in context may exist as probabilities (for example metaphor, allusion, sarcasm, off subject, in error and falsehood) until the time of chosen interpretation by the recipient, or qualified conversationally, or confirmed by action by the speaker. Results may be nondeterministic except as statistical norms, based on probabilistic states of quantum devices. Cooperative behavior for security and defense/offense are possible applications. Layers of security can be implemented within the architecture, e.g., the layering of forums. The open platform aspects may fit well with zero-trust architectures. Animal defense and swarm cooperation to avoid, confuse or distract other entities via cooperation, for example using cooperating submind applications on mobile devices (such as mobile phones and digital assistants) to emit the sounds of moving and barking animals, or high and low frequency tones with beat frequencies, for cumulative dispersed effects, or to respond with other organizing directions for humans and devices.
Team marketing, e.g., feature and price testing (by buyer and seller) may be addressable by the present invention, particularly in the enlistment and teaming of existing marketing AI chatbots. Subminds with specific product or service knowledge may be added to a conversation, without the initial submind team member leaving, enabling improved sales opportunities and customer service engagement. Collaborative confusion where the purpose of the collaboration is to mislead, distract, or confuse, possibly through purposeful ambiguity. Entertainment animation, including interactive immersive AR/VR, recorded, modeled, or historical, or combined, will find the present invention of use. Lip synchronization during animated speech is a challenging area that requires balance among the various factors and ways to evaluate them, which the present invention could excel at. Real time flexible immersive simulation will be possible using subminds for efficient component simulations.”)
Leeds does not specifically teach 1. A computing device comprising: a computer-readable medium storing instructions for digital twin behavioral modeling; and a processor configured to execute the instructions to cause a system including at least a multimodal dialog manager and a digital twin platform to: However, Lok does teach this limitation (see Lok [0074] As described above, the embodiments of the invention may be embodied in the form of hardware, software, firmware, or any processes and/or apparatuses for practicing the embodiments. Embodiments of the invention may also be embodied in the form of computer program code containing instructions embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. The present invention can also be embodied in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.”) wherein social and functional behavior of humans is transferred to the virtual humans via the digital twin platform to generate a learned digital twin behavior model; (see Lok [0047] To improve skills education, H-VH interactions with AAR are augmented. AAR enables students to review their H-VH interaction to evaluate their actions, and receive feedback on how to improve future real-world experiences. AAR for H-VH interactions incorporates three design principles: 1. An H-VH interaction is composed of social, temporal, and spatial characteristics. 
These characteristics will be explored by students via AAR visualizations. 2. An H-VH interaction is a set of signals. Interaction signals are captured, logged, and processed to produce visualizations. 3. An H-VH interaction is complex. Students gain insight into this complexity by reviewing multiple visualizations, such as audio, video, text, and graphs to enable AAR, IPSViz processes the signals characterizing an HVH interaction to provide an array of visualizations. The visualizations are used to facilitate interpersonal skills education. Novel visualizations can be produced by leveraging the many signals that are captured in an H-VH interaction. Given an H-VH interaction, AAR is facilitated through the following visualization types: the H-VH interaction can be 3D rendered from any perspective, including that of the conversation partner (the virtual camera is located at the VH's eyes). These are called "spatial visualizations." Students are able to perceive "what it was like to talk to themselves"; events in the H-VH interaction are visualized with respect to an interaction timeline. These are called "temporal visualizations." Students are able to discern the relationship between conversation events; Verbal and nonverbal behaviors are presented in log, graph, and 3D formats. These are called "social visualizations." Students are able to understand how their behavior affects the conversation.”) train the multimodal dialog manager and a virtual human, wherein to train, the processor is further configured to execute instructions to: (see Lok [0032] “The creation of VHs for practicing interview skills is logistically difficult and time consuming. The logistical hurdles involve the efficient acquisition of knowledge for the conversational model; specifically, the portion of the model that enables a VH to respond to user speech. Acquiring this knowledge has been a problem because it required extensive VH developer time to program the conversational model. 
Embodiments of the invention include a method for implementing a Virtual People Factory.”) and generate responses to the multimodal queries or conversations based on the learned digital twin behavior model. (see Lok [0021] “Users are able to interact with the MRH through a combination of verbal, gestural, and haptic communication techniques. The user communicates verbally with the MRH patient using natural speech. Wireless microphone 104 transmits the user's speech to the simulation system 112, which performs speech recognition. Recognized speech is matched to a database of question-answer pairs using a keyword based approach. The database for a scenario consists of 100-300 question responses paired with 1000-3000 questions. The many syntactical ways of expressing a question are handled by the keyword-based approach and a list of common synonyms. The MRH responds to matched user speech with speech pre-recorded by a human patient through the HMD 102.”)
Leeds and Lok are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Leeds' processing of social simulations which model behavior of humans and virtual humans, wherein each pair of a human and a virtual human form digital twins, and transfer of knowledge through the social simulations to the virtual humans, to incorporate the teachings of Lok, namely: a computing device comprising a computer-readable medium storing instructions for digital twin behavioral modeling and a processor configured to execute the instructions to cause a system including at least a multimodal dialog manager and a digital twin platform; wherein social and functional behavior of humans is transferred to the virtual humans via the digital twin platform to generate a learned digital twin behavior model; training the multimodal dialog manager and a virtual human; and generating responses to the multimodal queries or conversations based on the learned digital twin behavior model. This allows users to model virtual conversations, as recognized by Lok [0038].
Leeds in view of Lok does not specifically teach receive one or more multimodal queries or conversations; parse the multimodal queries or conversations for content; recognize and sense one or more multimodal content from the parsed content; However, Hayashida does teach this limitation (see Hayashida [0073] “The action instructing unit 125 monitors the non-verbal behavior of the communication partner user using the log data used in generating the image of the user avatar. In addition, the action instructing unit 125 determines whether or not the communication partner user has performed a particular non-verbal behavior based on a result of the monitoring. Further, when the action instructing unit 125 determines that the communication partner user has performed a particular non-verbal behavior, the action instructing unit 125 determines an appropriate image of the machine avatar for bringing the user into a post desirable-change user state, and gives an instruction to the machine avatar information display processing unit 121.”)
Leeds in view of Lok and Hayashida are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of the combination of Leeds and Lok to incorporate receiving one or more multimodal queries or conversations; parsing the multimodal queries or conversations for content; and recognizing and sensing one or more multimodal content from the parsed content, as taught by Hayashida. This allows improved determination accuracy, as recognized by Hayashida [0407].
As to Independent Claim 11, Claim 11 is a parallel method claim with limitations similar to that of claim 1 and is rejected under the same rationale.
As to Claim 2, Leeds in view of Lok and further in view of Hayashida teaches 2. The computing device of claim 1,
Furthermore, Hayashida teaches wherein the social and functional behavior include verbal and non-verbal behavioral patterns. (see Hayashida [0073] “The action instructing unit 125 monitors the non-verbal behavior of the communication partner user using the log data used in generating the image of the user avatar. In addition, the action instructing unit 125 determines whether or not the communication partner user has performed a particular non-verbal behavior based on a result of the monitoring. Further, when the action instructing unit 125 determines that the communication partner user has performed a particular non-verbal behavior, the action instructing unit 125 determines an appropriate image of the machine avatar for bringing the user into a post desirable-change user state, and gives an instruction to the machine avatar information display processing unit 121.”)
Leeds in view of Lok and further in view of Hayashida are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of the combination of Leeds and Lok to incorporate the teaching of Hayashida wherein the social and functional behavior include verbal and non-verbal behavioral patterns. This allows improved determination accuracy, as recognized by Hayashida [0407].
As to Claim 3, Leeds in view of Lok and further in view of Hayashida teaches 3. The computing device of claim 1,
Furthermore, Leeds teaches wherein quantum learning and quantum transfer is used to transfer the knowledge among the digital twins. (see Leeds, 55:60-56:67, quoted in full in the rejection of claim 1 above.)
As to Claim 4, Leeds in view of Lok and further in view of Hayashida teaches 4. The computing device of claim 3,
Furthermore, Leeds teaches wherein quantum teleportation and quantum entanglement is used to transfer conversational control states among the digital twins. (see Leeds, 55:60-56:67, quoted in full in the rejection of claim 1 above.)
As to Claim 5, Leeds in view of Lok and further in view of Hayashida teaches 5. The computing device of claim 4,
Furthermore, Leeds teaches wherein the quantum teleportation transfers a conversational control state to one of the human or the virtual human without communicating a control change to the human or the virtual human currently having the conversational control state. (see Leeds, 55:60-56:67, quoted in full in the rejection of claim 1 above.)
As to Claim 6, Leeds in view of Lok and further in view of Hayashida teaches 6. The computing device of claim 5,
Furthermore, Leeds teaches wherein the quantum entanglement describes quantum states of the human and the virtual human with reference to each other. (see Leeds, 55:60-56:67, quoted in full in the rejection of claim 1 above.)
As to Claim 7, Leeds in view of Lok and further in view of Hayashida teaches 7. The computing device of claim 6,
Furthermore, Leeds teaches wherein the human and the virtual human exist in a superposition quantum state. (see Leeds, 55:60-56:67, quoted in full in the rejection of claim 1 above.)
As to Claim 8, Leeds in view of Lok and further in view of Hayashida teaches 8. The computing device of claim 7,
Furthermore, Leeds teaches wherein the human and the virtual human can switch between quantum states to capture digital twin behavioral patterns represented in the learned digital twin behavior model. (see Leeds, 55:60-56:67, quoted in full in the rejection of claim 1 above.)
As to Claim 9, Leeds in view of Lok and further in view of Hayashida teaches claim 9, "The computing device of claim 4."
Furthermore, Leeds teaches wherein the digital twin platform is a multi-layer quantum framework for performing the quantum teleportation and the quantum entanglement as between the digital twins. (see Leeds, 55:60-56:67, quoted above with respect to claim 8: e.g., "Quantum computing and effects utilized by subminds may include probabilistic calculations, particle entanglement and calculation with matrices.")
As to Claim 10, Leeds in view of Lok and further in view of Hayashida teaches claim 10, "The computing device of claim 3."
Furthermore, Leeds teaches wherein quantum information difference is minimized between data points for the human and data points for the virtual human in quantum space so that the virtual human behaves substantially the same as the human. (see Leeds, 55:60-56:67, quoted above with respect to claim 8: e.g., "all possible interpretations of a conversation segment in context may exist as probabilities … until the time of chosen interpretation by the recipient, or qualified conversationally, or confirmed by action by the speaker.")
As to Claim 12, claim 12 is a parallel method claim with limitations similar to those of claim 2 and is rejected under the same rationale.
As to Claim 13, claim 13 is a parallel method claim with limitations similar to those of claim 3 and is rejected under the same rationale.
As to Claim 14, claim 14 is a parallel method claim with limitations similar to those of claim 4 and is rejected under the same rationale.
As to Claim 15, claim 15 is a parallel method claim with limitations similar to those of claim 5 and is rejected under the same rationale.
As to Claim 16, claim 16 is a parallel method claim with limitations similar to those of claim 6 and is rejected under the same rationale.
As to Claim 17, claim 17 is a parallel method claim with limitations similar to those of claim 7 and is rejected under the same rationale.
As to Claim 18, claim 18 is a parallel method claim with limitations similar to those of claim 8 and is rejected under the same rationale.
As to Claim 19, claim 19 is a parallel method claim with limitations similar to those of claim 9 and is rejected under the same rationale.
As to Claim 20, claim 20 is a parallel method claim with limitations similar to those of claim 10 and is rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KRISTEN MICHELLE MASTERS whose telephone number is (703)756-1274. The examiner can normally be reached M-F 8:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Louis Desir can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KRISTEN MICHELLE MASTERS/Examiner, Art Unit 2659
/PIERRE LOUIS DESIR/Supervisory Patent Examiner, Art Unit 2659