DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/26/2025 has been entered.
Response to Amendment
Claims 1-9 are rejected under 35 U.S.C. § 101 because the claimed subject matter is directed to a judicial exception (an abstract idea). The limitations of claim 1 recite obtaining data, determining, generating, and presenting results, which are concepts that fall within the judicial exceptions of “mental processes” and “mathematical concepts.”
Applicant’s amendment to claims 1, 10, and 18 obviates the rejection under 35 U.S.C. 112(b); therefore, that rejection is withdrawn.
However, newly amended claims 1, 10, and 18 recite the limitation “rendering in the metaverse environment and in a proximity of an avatar associated with the user” and are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite.
Applicant’s amendment to claim 22 obviates the rejection under 35 U.S.C. 112(a); therefore, that rejection is withdrawn.
However, newly amended claims 1, 10, and 18 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement.
Status of Claims
The Amendment filed on November 26, 2025 has been entered. Claims 1, 10, 18, and 22 were amended. Claims 16, 17, and 21 were canceled. Claim 23 was newly added. As a result, claims 1-15, 18-20, and 22-23 are pending, of which claims 1, 10, and 18 are in independent form.
Response to Arguments
With respect to the rejection under 35 U.S.C. 112(a), Applicant argues, citing MPEP § 2161, that “[t]he Office Action does not address factors identified in Capon: (a) the existing knowledge in the field, (b) the extent and content of the prior art, (c) the maturity of the science or technology, and (d) the predictability of the aspect at issue.”
It appears Applicant has quoted this section of the MPEP piecemeal, as the MPEP further states: “
For computer-implemented inventions, the determination of the sufficiency of disclosure will require an inquiry into the sufficiency of both the disclosed hardware and the disclosed software due to the interrelationship and interdependence of computer hardware and software. The critical inquiry is whether the disclosure of the application relied upon reasonably conveys to those skilled in the art that the inventor had possession of the claimed subject matter as of the filing date. Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 682, 114 USPQ2d 1349, 1356 (citing Ariad Pharm., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1351, 94 USPQ2d 1161, 1172 (Fed. Cir. 2010) in the context of determining possession of a claimed means of accessing disparate databases).”
Furthermore, “When examining computer-implemented functional claims, examiners should determine whether the specification discloses the computer and the algorithm (e.g., the necessary steps and/or flowcharts) that perform the claimed function in sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor possessed the claimed subject matter at the time of filing. An algorithm is defined, for example, as "a finite sequence of steps for solving a logical or mathematical problem or performing a task." Microsoft Computer Dictionary (5th ed., 2002). Applicant may "express that algorithm in any understandable terms including as a mathematical formula, in prose, or as a flow chart, or in any other manner that provides sufficient structure." Finisar Corp. v. DirecTV Grp., Inc., 523 F.3d 1323, 1340, 86 USPQ2d 1609, 1623 (Fed. Cir. 2008) (internal citation omitted). It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. If the specification does not provide a disclosure of the computer and algorithm in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention, a rejection under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for lack of written description must be made.” Such is the case in this application, and Applicant has not provided any evidence of supporting disclosure in response to the rejection.
Contrary to Applicant’s arguments, the Office action clearly pointed out the above facts when the claims were rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The rejection is maintained below.
Regarding Applicant’s mere mention of the Capon factors and apparent reliance on general knowledge and the choice of “various machine learning models,” as noted above, the rejection meets the prima facie burden under MPEP 2161 and 2161.01(I).
Written description asks whether the specification reasonably conveys to a POSITA that the inventor had possession of the claimed invention at the time of filing. For computer-implemented functional limitations, MPEP 2161.01(I) instructs that the specification should disclose sufficient structure or an algorithm describing how the function is performed. Merely stating “use machine learning” or “an ML model assigns weights and classifies” is not, by itself, sufficient to show possession of the recited functions (e.g., “determining whether the user is a human user or a software-based entity based on aggregating attribute information,” and “computing a confidence level based on profiling motion data and audio data”).
The Office Action identified a specific missing element material to the claimed functions: the specification contains no disclosure of the algorithmic steps or structures by which the interpreter aggregates and weights heterogeneous sensor streams. Identifying this missing element is a sufficient “reasonable explanation” to establish a prima facie § 112(a) case under MPEP 2161. The burden then shifts to Applicant to show where the specification provides that level of detail or equivalent structure/algorithm. In the instant response, Applicant has instead opted not to provide such a showing.
Even addressing the Capon factors does not overcome the deficiency here. Applicant argues the Office did not analyze (a) knowledge in the field, (b) prior art, (c) maturity of the technology, and (d) predictability (Capon). Even considering those factors, the result is the same:
(a) Existing knowledge in the field: It is undisputed that ML techniques generally exist. But written description concerns what this inventor possessed, not what a POSITA could later supply. See Ariad v. Lilly (written description is not satisfied by a “mere wish or plan”); Lockwood (an applicant cannot rely on the knowledge of a POSITA to supply missing description). Simply pointing to the existence of “various machine learning models” does not identify what model, features, training, fusion, or profiling this inventor possessed to perform the specific claimed functions.
(b) Extent and content of the prior art: Generic ML classifiers for signal classification are well known. That, however, underscores the breadth of the claim (any ML that profiles motion and audio to output a probability) and heightens the need for representative species or common structural features of the genus the applicant purports to possess. The specification does not disclose representative algorithms or a unifying set of features/steps characterizing the claimed genus of “profiling motion and audio to compute a confidence probability” in this metaverse context.
(c) Maturity of the technology: ML is a broad and evolving field with numerous architectures and fusion strategies. Even in a mature area, when the claim recites a functional result (robust human vs. bot classification with a computed probability from motion and audio profiling), written description requires disclosure of how the function is achieved (e.g., representative algorithm(s) or sufficient structural/processing detail). See MPEP 2161.01(I).
(d) Predictability of the aspect at issue: The particular aspect at issue—accurately detecting human vs software control in real time from heterogeneous metaverse sensor streams and computing a calibrated probability—is not predictably achieved by any off‑the‑shelf ML model. Performance depends critically on feature engineering (e.g., IMU gait/latency signatures, audio prosody/ASR artifacts), synchronization of modalities, fusion strategy (early/late fusion, weighting), classifier architecture, and probability calibration. Because outcomes are not predictable without specifying these elements, more—not less—algorithmic detail is required to demonstrate possession of the claimed functions.
Accordingly, even when the Capon factors are considered, the current disclosure remains insufficient to show possession of the claimed computer-implemented functional limitations.
Furthermore, Applicant’s statement that a POSITA “may have selected various machine learning models depending on a particular use case scenario” is a direct admission that the specification leaves the selection and design of the core algorithmic elements to the reader. That is the hallmark of an enablement-type argument, not written description. For written description, the applicant must demonstrate what they had in hand at filing, e.g., representative species (specific model(s) and steps) or common structural features of the genus. See Ariad; MPEP 2163.02. A bare statement that different ML models could be used does not identify any representative model, features, training regimen, fusion method, or probability calibration sufficient to show possession of the claimed “profiling” and “confidence” computations.
Moreover, the claim requires “the confidence level is computed based on profiling motion data and audio data from the plurality of data streams.” The specification uses the word “profiling” but does not describe the profile construction steps (e.g., windowing, spectral features for audio, inertial features for motion, normalization), the fusion formula/architecture, or the mapping from fused features to a probability or range. Without such details, the claim’s critical “how” remains undescribed.
As further noted, verbatim recitation of claim language in the specification does not itself satisfy written description for functional computer-implemented limitations. Applicant notes the specification “recites verbatim the features set forth in the claim.” For computer-implemented functional claims, MPEP 2161.01(I) makes clear that parroting claim language or naming a black-box “interpreter” that “performs ML” does not, without more, demonstrate possession of the algorithm(s) or structure that accomplish the recited functions.
Applicant’s arguments with respect to the claims rejected under 35 U.S.C. 103(a) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter.
On Pages 12-15 of remarks, Applicant argues that Singh, Champion, Kishi, and Khan do not teach “wherein the confidence level is computed based on profiling motion data and audio data from the plurality of data streams,” as amended in independent claims 1, 10, and 18.
Applicant’s arguments with respect to the rejections of claims 1, 10, and 18 have been fully considered; however, upon further consideration, a new ground of rejection is made in view of Soryal et al. (US 2023/0388796 A1) and Cheng et al. (US 2010/0262572 A1).
As to dependent claims 2-9, 11-15, 19-20, and 22-23, these claims remain rejected by virtue of their dependency from the independent claims. Therefore, the examiner maintains the rejection under 35 U.S.C. § 103.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION. — The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-15, 18-20, and 22-23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1, 10, and 18 recite the limitation “rendering in the metaverse environment and in a proximity of an avatar associated with the user,” which renders the claims indefinite because “in a proximity” is a relative term. The claim language does not define how close to the avatar the rendering must occur, and the metes and bounds of the claims therefore cannot be ascertained. Accordingly, it is unclear what the intended scope of the claimed invention is.
Dependent claims 2-9, 11-15, 19-20, and 22-23 are rejected by virtue of their dependency from independent claims 1, 10, and 18.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-15, 18-20, and 22-23 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The claims are rejected under 35 U.S.C. § 112(a) for lack of adequate written description support for the recited computer-implemented functional limitations, consistent with MPEP § 2161 and MPEP § 2161.01(I).
Claim 1 recites multiple computer-implemented functional limitations (“determining whether the user is a human user or a software-based entity based on aggregating attribute information extracted from the plurality of data streams,” “generating a proof-of-life indicator … wherein the confidence level is computed based on profiling motion data and audio data,” and “rendering … an icon and a visual indicator … indicating the confidence level”). Claims 10 and 18 similarly recite the same computer-implemented functional limitations.
MPEP § 2161.01(I) requires that, for functional limitations performed by a computer, the specification disclose either:
structural elements (modules, circuitry, components) that perform the function; or
an algorithm that describes how the function is performed (sufficient detail so that a person of ordinary skill in the art understands how to produce the claimed result), when the claim reads primarily to the function rather than to particular structure.
Although the specification discloses an “interpreter” (e.g., Figs. 1-2, interpreter 154/220) that resides on a cloud or computer, a plurality of sensors (e.g., biometric sensors, IMU, microphones), and a high-level concept of using “machine learning (ML)” to aggregate sensor streams, assign weights, and compute a confidence level, the specification does not provide a sufficiently detailed disclosure of the structure or algorithm that implements the claimed functions.
The specification [0032] repeatedly states that the interpreter “performs machine learning (ML) to generate aggregated attribute information,” that the interpreter “has one or more machine learning (ML) model(s) that analyze an aggregated view of the different sensor data stream characteristics,” and that it “may assign different weights to different sensor data streams.” These statements amount to nothing more than recitations of desirable results (classification, weighting) and the naming of a processing module (the interpreter); they do not disclose the structure of the ML model(s), the algorithms, or representative steps used to convert the sensor data streams into features, the manner in which weights are computed or applied, the classifier architecture, the training data/labels, loss function, decision thresholds, or other algorithmic details that would show possession of the claimed means for performing the recited determination and confidence computation, as required by MPEP § 2161.01(I).
The claim requires that the confidence level be “computed based on profiling motion data and audio data.” The specification describes “profiling motion data and/or audio” in generic terms but does not provide algorithmic detail of the profiling process (e.g., feature extraction steps, signal processing, statistical models, classification routines, how motion and audio features are fused, how the probability or probability range is computed or calibrated). Absent such detail, the specification does not show that the inventor possessed a particular way of performing the profiling and computing recited in the claim.
The claim recites rendering a visual indicator representing the confidence value “in proximity of an avatar.” The specification contains examples of rendering (e.g., heartbeat icon, color-coding, shading) and shows that PoLi may be presented visually; however, these presentation examples do not supply the missing algorithmic/structural support for the computational functions by which the confidence value is produced.
When examining computer-implemented functional claims, examiners should determine whether the specification discloses the computer and the algorithm (e.g., the necessary steps and/or flowcharts) that perform the claimed function in sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor possessed the claimed subject matter at the time of filing. Therefore, as set forth in MPEP § 2161.01(I), when a claim recites a computer-implemented function in functional terms, it is improper to rely solely on broad, high-level statements of a desirable result, such as “use machine learning” or “apply ML models,” without disclosing the structure or algorithmic steps by which those functions are accomplished.
The current specification provides high-level functional descriptions and examples, but does not disclose sufficient structure (beyond naming an “interpreter” module) or algorithms that demonstrate possession of the claimed means for performing the determination, computation, and generation of the PoLi as recited in the claim. Therefore, the claim is broader than the disclosed embodiments and lacks adequate written description support.
The corresponding dependent claims fall with their respective independent claims and are rejected for the same reasons stated above.
Newly added claim 23 further recites computing the confidence level by comparing the motion data with a plurality of learned behaviors of the human user stored in a profile of the human user and by comparing the audio data with an audio profile of the human user; and adjusting the confidence level based on obtaining an additional data stream from at least one additional type of sensor configured to monitor the activity of the user within the metaverse environment.
For the same reasons above, the current specification provides high-level functional descriptions and examples, but does not disclose sufficient structure (beyond naming an “interpreter” module) or algorithms that demonstrate possession of the claimed means for performing the determination, computation, and generation of the PoLi as recited in the claim.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9 and 23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite obtaining data streams, determining whether a user is a human user or a software-based entity, computing a confidence level, and presenting the results, which is considered a judicial exception because it falls within the groupings of mental processes and mathematical concepts. This judicial exception is not integrated into a practical application as discussed below, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception as discussed below.
This part of the eligibility analysis evaluates whether the claim falls within any statutory category. MPEP 2106.03. Claim 1 recites at least one step or act, including obtaining, determining, generating, and rendering. Thus, the claim is to a process, which is one of the statutory categories of invention.
Analysis
Step 1 (Statutory Categories) — 2019 PEG pg. 53
Claims 1-9 and 23 are directed to the statutory categories of invention.
Step 2A, Prong 1 (Do the claims recite an abstract idea?) — 2019 PEG pg. 54
Claim 1 recites the following types of subject matter that are judicial exceptions: Abstract idea — mental processes and data manipulation/analysis:
“determining whether the user is a human user or a software-based entity based on aggregating attribute information extracted from the plurality of data streams” (analyzing and classifying data);
“generating a proof-of-life indicator that indicates whether the user is determined to be the human user or the software-based entity and a confidence level associated with the proof-of-life indicator, wherein the confidence level is computed based on profiling motion data and audio data from the plurality of data streams” (computing a probability/confidence value);
“rendering, in the metaverse environment and in a proximity of an avatar associated with the user, an icon and a visual indicator, wherein the icon indicates whether the user is the human user and the visual indicator indicates the confidence level, the confidence level being a probability value or a range of probability values that the user is the human user” (presenting information).
These limitations recite data collection, data processing/classification, mathematical operations (computation of probability/confidence), and presentation of results — concepts that fall within the judicial exceptions of “mental processes” and “mathematical concepts” (see PEG Step 2A, examples and categories of abstract ideas). It is noted that a probability value or range of probability values is a numerical calculation derived from mathematical relationships. The claim does not recite any specific technological mechanism for computing the confidence level; rather, it relies on the result of a mathematical evaluation indicating the likelihood that a user is human. Moreover, the calculation of a confidence level as a probability can be performed using pen-and-paper mathematical analysis, further confirming that the claimed concept is abstract.
Step 2A, Prong 2 (Does the claim recite additional elements that integrate the judicial exception into a practical application?) — 2019 PEG pg. 54
Although the specification names physical components (a plurality of sensors such as biometric sensors, IMU/motion sensors, and microphones, user devices, and an “interpreter” module) and a metaverse environment where output is rendered near an avatar, the claim merely uses these conventional components to perform the abstract tasks of data collection, application of an unspecified ML model for classification/weighting, computation of a confidence/probability, and presentation of a visual icon. Claim 1 does not describe a particular improvement to the functioning of the computer or other technology, nor does it recite a specific technical implementation or non-generic arrangement of components that meaningfully limits the claim to an applied technological solution.
The mere presence of “obtaining a plurality of data streams from a plurality of sensors” (collecting data), of generic physical components (sensors, processors, rendering displays), and of the use of “machine learning (ML)” in a generic way is insufficient to integrate the abstract idea into a practical application under the PEG. The specification does not set forth specific algorithms, specialized signal-processing or sensor-fusion techniques, a particular ML architecture or training regimen, or other technical improvements that would show the claimed subject matter provides a particular, concrete technological enhancement to the metaverse system or to computing technology itself.
Therefore, claim 1 is directed to an abstract idea and is not integrated into a practical application under Step 2A.
Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?) — 2019 PEG pg. 56
The recited “plurality of sensors” includes well-known types of sensors (biometric monitors, motion sensors/IMU, microphones) that are used in the ordinary manner to collect sensor data. The claim’s limitation of aggregating sensor streams, which according to the specification is performed using machine learning (ML), is recited at a high level (i.e., using a processor as a tool) without identifying a specific ML architecture, feature-extraction algorithm, fusion formula, training data, or other algorithmic detail that would meaningfully limit the claim or show a technical improvement. The specification’s references to weighting sensor streams and to aggregated attribute information are high-level descriptions and do not supply the required unconventional technical implementation. The step of “rendering … an icon and a visual indicator” proximate an avatar is a conventional user interface/display activity that corresponds to insignificant extra-solution activity.
Because claim 1 merely instructs practitioners to implement the abstract idea using routine, conventional components and high-level ML functionality, and does not recite a specific, non-conventional manner of performing the claimed steps (e.g., a concrete sensor-fusion algorithm, a particular ML model and training approach, a specialized real-time processing architecture, or secure hardware attestation that materially improves system operation), the claim does not provide an “inventive concept” sufficient to amount to significantly more than the abstract idea.
Accordingly, under Step 2B of the PEG, claim 1 is not patent eligible.
Dependent claims 2-9 and 23 are rejected by virtue of their dependency from independent claim 1.
Dependent claims — analysis and reasons for rejection
Claim 2 (Original): “wherein providing the proof-of-life indicator in the metaverse environment includes: configuring each of a plurality of avatars in a virtual space of the metaverse environment based on a respective proof-of-life indicator for each of the plurality of avatars.”
Step 2A, Prong 1: This claim depends from claim 1 and incorporates the abstract idea and mathematical concept identified therein. A “proof-of-life indicator” necessarily represents the result of an evaluation, such as a logical determination, a score, or a probability comparison. Such determinations constitute mental processes and mathematical calculations involving evaluation and decision making that can be performed by a human using comparison or pen-and-paper analysis.
Step 2A, Prong 2: The additional limitation “configuring avatars … based on [the] indicator” is a rule for applying and presenting the classification result within a virtual space. It amounts to using the abstract result to drive presentation/behavior in a user interface. It does not recite a specific improvement to computer technology or a non-generic arrangement of components. See 2019 PEG ¶ 54; Electric Power Group (display of results is abstract). Although the claim recites a metaverse environment, a virtual space, and avatars, these elements merely provide a field of use for the abstract idea. Thus, the claim does not improve avatar rendering technology.
Step 2B: Configuring avatars according to results is conventional post-solution activity in graphical systems (UI logic). Further, using an “indicator” to determine the configuration is routine conditional logic. No unconventional technology is recited, and no inventive concept is present. Claim 2 does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 2 is directed to an abstract idea and a mathematical process.
Claim 3 (Original): “wherein configuring each … includes at least one of: hiding … at least a first avatar … that has an associated proof-of-life indicator indicating control by the software-based entity, or modifying an appearance of a respective avatar … that has the associated proof-of-life indicator indicating control by the software-based entity.”
Step 2A, Prong 1: This claim incorporates the abstract idea and mathematical concept from claim 1. The limitation requires evaluating a “proof-of-life indicator” to determine whether an avatar is controlled by a software-based entity, and then hiding the avatar or modifying its appearance. Such determinations constitute mental processes and mathematical calculations involving evaluation and decision making that can be performed by a human using comparison or pen-and-paper analysis. Further, claim 3 does not recite any specific technical mechanism for generating the proof-of-life indicator.
Step 2A, Prong 2: The added behaviors (hiding or modifying appearance based on the indicator) are presentation/display rules that condition the display of UI objects. They do not change how the computer or rendering engine operates at a technical level; they merely present the classification outcome. See 2019 PEG (presentation of information), and cases such as Interval Licensing v. AOL (information display) and SAP v. InvestPic (mathematical processing with result presentation). Although the claim recites hiding an avatar and modifying an avatar’s appearance, these elements merely provide a field of use for the abstract idea. Thus, the claim does not improve avatar rendering technology.
Step 2B: Hiding or modifying an avatar’s visual attribute based on a result is conventional UI logic and does not amount to significantly more. Claim 3 does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 3 is directed to an abstract idea and a mathematical process.
Claim 4 (Original): “wherein configuring each … includes modifying the appearance … by: determining an attribute … from a plurality of attributes based on the confidence level … including different colors, shadings, or appearances for a particular range of values of the confidence level.”
Step 2A, Prong 1: This claim incorporates the abstract idea of claim 1. The limitation requires evaluating a confidence level and determining an attribute. Such determinations constitute a mental process and mathematical calculations involving evaluation and decision making that can be performed by a human using comparison or pen-and-paper analysis. Further, claim 4 does not recite any specific technical mechanism and relies on the result of a mathematical evaluation to drive a decision.
Step 2A, Prong 2: Mapping numeric output to UI attributes (colors/shading) is a classic presentation-of-information operation. It does not recite a technical improvement such as a new rendering technique, GPU pipeline, or other computer-functionality enhancement. See 2019 PEG (presentation). Although the claim recites “metaverse environment, attributes including different colors, shadings, or appearances…,” these elements merely provide a field of use for the abstract idea. Thus, the claim does not improve avatar rendering technology.
Step 2B: Assigning colors/shading based on ranges is a routine UI convention and supplies no inventive concept. Claim 4 does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 4 is directed to an abstract idea and a mathematical process.
Claim 5 (Currently Amended): “wherein determining … includes: determining, at a client, … based on biometric data … determining, at the client, … based on profiling the motion data …; and determining, at the client, … based on profiling an audio stream ….”.
Step 2A, Prong 1: This claim incorporates the abstract idea of claim 1 (classification and probability computation). Accordingly, “obtaining biometric data, motion data, and audio stream data, profiling such data and determining … whether the user is the human user or the software-based …” necessarily represents the result of an evaluation, such as extracting features, comparing data patterns, applying thresholds, and reaching a decision based on the mathematical evaluations. Such determinations constitute a mental process and mathematical calculations involving evaluation and decision making that can be performed by a human using comparison or pen-and-paper analysis.
Step 2A, Prong 2: Reciting “at a client” merely identifies a location of processing on generic client hardware. “Profiling” biometric, motion, and audio streams is still high-level functional analysis without recited specific algorithms, architectures, or improvements to the client device's operation (e.g., latency management, signal-processing detail, classifier structure). Field-of-use limitations and generic computation do not integrate the abstract idea into a practical application. See Alice; 2019 PEG ¶ 54. Although the claim recites “biometric sensors, user devices, a metaverse environment,” these elements merely provide a field of use for the abstract idea. Thus, the claim does not improve avatar rendering technology.
Step 2B: The use of client devices to analyze sensor streams generally is well-understood, routine, and conventional. The claim does not recite a non-conventional technical implementation and supplies no inventive concept. Claim 5 does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 5 is directed to an abstract idea and a mathematical process.
Claim 6 (Original): “wherein determining … includes: increasing or decreasing the confidence level … based on a number and types of the plurality of sensors.”
Step 2A, Prong 1: This claim depends from claim 1 and incorporates the abstract idea and mathematical concept identified therein. The limitation requires “increasing or decreasing” a “confidence level … based on the number and types of sensors.” These constitute mathematical relationships. Such operations are examples of mathematical calculations that can be performed by a human using comparison or pen-and-paper analysis.
Step 2A, Prong 2: Adjusting a probability/score by the number/type of sensors is a mathematical/heuristic weighting rule; it does not improve computer technology or recite a specific technical implementation, and it is result-oriented. Although the claim recites “number/type of sensors…,” these elements merely provide a field of use for the abstract idea. Thus, the claim does not improve avatar rendering technology, nor does it improve sensor hardware.
Step 2B: Heuristic re-weighting is conventional in classification systems; no non-conventional elements are claimed and no inventive concept is present. Claim 6 does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 6 is directed to an abstract idea and a mathematical process.
Claim 7 (Original): “wherein determining … includes: generating an aggregated user profile by combining the attribute information; and determining the confidence level based on the aggregated user profile.”
Step 2A, Prong 1: This claim depends from claim 1 and incorporates the abstract idea and mathematical concept identified therein. The limitation requires combining attribute information and determining a confidence level, which necessarily represents the result of an evaluation, such as a logical determination, a score, or a probability comparison. Such determinations constitute a mental process and mathematical calculations involving evaluation and decision making that can be performed by a human using comparison or pen-and-paper analysis.
Step 2A, Prong 2: “Aggregated user profile” and “combining attribute information” are generic data-aggregation steps; computing confidence from the aggregation is a mathematical concept. No specific technical mechanism that improves technology is recited. Although the claim recites “a user profile, and a confidence level,” these elements merely provide a field of use for the abstract idea. Thus, the claim does not improve computer performance.
Step 2B: Data aggregation and probability computation are routine for classification and supply no inventive concept. Claim 7 does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 7 is directed to an abstract idea and a mathematical process.
Claim 8 (Previously Presented): “wherein providing the proof-of-life indicator includes: providing the proof-of-life indicator together with attributes of the user to a target virtual space in the metaverse environment.”
Step 2A, Prong 1: This claim depends from claim 1 and incorporates the abstract idea and mathematical concept identified therein. Accordingly, “providing the proof-of-life indicator … together with attributes of the user to a target virtual space” merely provides or transmits information, including a calculated indicator, which is a form of abstract data handling. Such determinations constitute a mental process and mathematical calculations involving evaluation and decision making that can be performed by a human using comparison or pen-and-paper analysis.
Step 2A, Prong 2: Supplying the indicator with user attributes to a “target virtual space” is data routing/presentation within a field of use. It does not recite a technical improvement (no new networking, storage, or rendering techniques). Although the claim recites “metaverse environment, target virtual space,” these elements merely provide a field of use for the abstract idea. Thus, the claim does not improve avatar rendering technology.
Step 2B: Sending the indicator to a target space is a conventional application of results in a UI/system context and supplies no inventive concept. Claim 8 does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 8 is directed to an abstract idea and a mathematical process.
Claim 9 (Original): “further comprising: updating the confidence level … based on user interactions in one or more virtual spaces in the metaverse environment.”
Step 2A, Prong 1: This claim depends from claim 1 and incorporates the abstract idea and mathematical concept identified therein. Accordingly, “updating the confidence level …” necessarily represents the result of evaluating interaction data, i.e., a mathematical calculation. Such determinations constitute a mental process and mathematical calculations involving evaluation and decision making that can be performed by a human using comparison or pen-and-paper analysis.
Step 2A, Prong 2: Updating a computed probability/score based on additional interactions is a mathematical refinement of the classification outcome. No improvement to computer functioning or non-generic implementation is recited. Although the claim recites “user interactions, and virtual spaces in the metaverse environment,” these elements merely provide a field of use for the abstract idea. Thus, the claim does not improve avatar rendering technology.
Step 2B: Dynamic updating of confidence based on new data is routine in analytics systems and supplies no inventive concept. Claim 9 does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 9 is directed to an abstract idea and a mathematical process.
Claim 23 (New): “further comprising: computing the confidence level by comparing the motion data with a plurality of learned behaviors of the human user stored in a profile of the human user and by comparing the audio data with an audio profile of the human user; and adjusting the confidence level based on obtaining an additional data stream from at least one additional type of sensor ….”
Step 2A, Prong 1: This claim depends from claim 1 and incorporates the abstract idea and mathematical concept identified therein. The limitation requires “comparing the motion data with stored learned behaviors, comparing the audio data with an audio profile, and adjusting the confidence level.” These steps involve pattern comparison, gathering of comparison results, and adjustment of a confidence value. Such determinations constitute a mental process and mathematical calculations involving evaluation and decision making that can be performed by a human using comparison or pen-and-paper analysis.
Step 2A, Prong 2: Comparing current data to “learned behaviors” or an “audio profile” and adjusting the confidence upon receiving additional sensor data are mathematical/analytical operations at a high level of generality. The claim does not recite a particular model structure (e.g., defined features, classifier architecture, similarity metric), a particular sensor-fusion algorithm, or any specific technical improvement in how the computer or metaverse system functions. As such, these limitations do not integrate the abstract idea into a practical application. See 2019 PEG ¶ 54; Electric Power Group. Although the claim recites “motion data, audio data, learned behavior profile, and metaverse environment,” these elements merely provide a field of use for the abstract idea. Thus, the claim does not improve motion sensing.
Step 2B: Maintaining profiles of user behaviors and adjusting confidence with additional sensor streams are routine practices and are recited generically. The claim lacks an unconventional technical implementation and supplies no inventive concept. Claim 23 does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 23 is directed to an abstract idea and a mathematical process.
Well-understood, routine, conventional (WURC) findings
The specification describes standard hardware components used in ordinary ways (sensors such as motion sensors/IMU, microphones, headsets, haptic gloves, client devices, rendering engines) for monitoring, computing, and displaying indicators in a virtual environment, indicating that these elements are conventional tools for data collection and UI presentation. See MPEP 2106.05(d). The claims recite these conventional components at a high level without specifying any non-conventional configuration or operation.
Conclusion for dependent claims
Because each of claims 2–9 and 23 recites similar limitations as outlined above, the additional limitations do not integrate the abstract idea into a practical application and do not recite an inventive concept that is significantly more than the abstract idea itself. Accordingly, claims 2–9 and 23 are rejected under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Soryal et al. (US 2023/0388796 A1), hereinafter Soryal in view of Cheng et al. (US 2010/0262572 A1), hereinafter Cheng.
In regards to claim 1, Soryal discloses a method comprising: obtaining a plurality of data streams from a plurality of sensors that are configured to monitor activity of a user that is active within a metaverse environment (Soryal, Para. 0023, the system provides a “Proof of Presence” or continuous authentication that ensures only the authorized user is still in control of the virtual entity in the Metaverse, and therefore ensures that the virtual entity's interactions commitments, speech, deeds, professional games, access private places, etc. are genuinely created by the real-world user) and (Soryal, Para. 0028, the cameras and sensors monitoring the physical space where the user equipment is located provide the following information to PVS 203: Face, facial expressions, etc. body parts, body structure and expressions, body movements when controlling the user equipment (joystick, goggles, mouse, keyboard, etc.). The way the user holds the user equipment with their hand and fingers. Voice and vocal expressions) and (Soryal, Para. 0026, PVS 203 includes a first data feed 204 from sensors at the first user equipment 201 and a second data feed 205 from sensors at the second user equipment 202. Such sensors may include cameras, sensors, haptic devices, joysticks, or the like (not illustrated) that provide an indication of the presence and/or identity of user 1 or detect a lack of continuity of the operation of the respective user equipment) and (Soryal, Para. 0029, the cameras and sensors monitoring the physical space where the user equipment is located provide the following information to PVS 203: Face, facial expressions, etc. body parts, body structure and expressions, body movements when controlling the user equipment (joystick, goggles, mouse, keyboard, etc.). The way the user holds the user equipment with their hand and fingers. Voice and vocal expressions) and (Soryal, Para. 0033, PVS 203 ensures the identity of the operator from these aspects.
PVS 203 ensures that speech uttered by the operator in the physical world is the same speech heard in the virtual reality environment 210), wherein the plurality of data streams relate to behavioral and biometric characteristics of the user that is interacting within the metaverse environment (Soryal, Para. 0027, this virtual data feed 206 from inside virtual reality environment 210 is like another invisible user placed inside the virtual space that provides audio and video from the virtual reality environment for monitoring the virtual entities inside that virtual space to PVS 203. Virtual camera 213 monitors the behavior of first virtual entity 211 and second virtual entity 212 to ensure that real-world actions by the operators of first user equipment 201 and second user equipment 202 are correlated with the respective virtual entity's actions. For example, PVS 203 observes the virtual entity's behavior/movements/gestures/etc);
determining whether the user is a human user or a software-based entity based on aggregating attribute information extracted from the plurality of data streams (Soryal, Para. 0023, in fact, the operation of some virtual entities may be purely autonomous, i.e., a mere manifestation of software running on a computer. The following disclosure illustrates a system and method to authenticate users as their virtual presence enters a virtual reality, also referred to as the “Metaverse,” but also authenticates their presence for the whole period that they are in the Metaverse. The system provides a “Proof of Presence” or continuous authentication that ensures only the authorized user is still in control of the virtual entity in the Metaverse, and therefore ensures that the virtual entity's interactions commitments, speech, deeds, professional games, access private places, etc. are genuinely created by the real-world user); wherein the confidence level is computed based on profiling motion data and audio data from the plurality of data streams (Soryal, Para. 0035, PVS 203 may develop a user profile with certain characteristics of the real-world users to further ensure that the identity of the real-world users is genuine. For example, biometric verification data of the user by sensors at the user equipment, such as facial recognition parameters, personal kinematics, biometric user interface analysis of input signal signatures, voice print analysis, two-factor authentication, keystroke analysis, mouse movements, or the like, may be stored in the profile for later comparison and authentication of the user).
Soryal discloses determining whether a virtual entity is controlled by a human user or a software-based entity by correlating/aggregating signals from user equipment with actions of the virtual entity (¶¶ 23, 24, 27: “Proof of Presence,” continuous authentication, correlation, issuing a warning if uncorrelated or the user is not present), and profiling motion and audio in the process (¶ 35: personal kinematics, voice print analysis, biometric user interface analysis). Soryal also teaches providing indications (“renderings”), including altering or removing the avatar (e.g., blurring the face) and issuing warnings when authentication fails (¶¶ 34, 41, 44–45).
Soryal does not expressly disclose rendering, proximate to an avatar, an icon and a numeric visual confidence indicator of human control (as a probability value or range). Specifically, Soryal does not explicitly disclose the claimed generating a proof-of-life indicator that indicates whether the user is the human user or the software-based entity and a confidence level associated with the proof-of-life indicator, and rendering in the metaverse environment and in a proximity of an avatar associated with the user, an icon and a visual indicator, wherein the icon indicates whether the user is the human user and the visual indicator indicates the confidence level, the confidence level being a probability value or a range of probability values that the user is the human user.
However, Cheng teaches rendering avatar-specific numeric indicators proximate to avatars in a virtual world UI, for example a per-avatar “authenticity score” displayed in a bubble floating above the avatar (¶ 42: calculation of a numerical score; ¶ 64: display of the score 330 above each avatar; FIG. 3). Cheng teaches generating a proof-of-life indicator that indicates whether the user is the human user (Cheng, Para. 0042, the representational authenticity handler 155 can utilize a preset algorithm upon the user representational authenticity data 145 to calculate a user representational authenticity score 180. The user representational authenticity score 180 can numerically quantify the degree of representational authenticity between the user's 105 collected user representational authenticity data 145 and their virtual representation 120) and (Cheng, Para. 0064, each avatar 310 shown in the user interface 300 can have the value for their user representational authenticity score 330 presented in a bubble floating above their head),
and rendering in the metaverse environment and in a proximity of an avatar associated with the user, an icon and a visual indicator, wherein the icon indicates whether the user is the human user (Cheng, Para. 0042, additionally, the representational authenticity handler 155 can utilize a preset algorithm upon the user representational authenticity data 145 to calculate a user representational authenticity score 180. The user representational authenticity score 180 can numerically quantify the degree of representational authenticity between the user's 105 collected user representational authenticity data 145 and their virtual representation 120) and (Cheng, Para. 0064, each avatar 310 shown in the user interface 300 can have the value for their user representational authenticity score 330 presented in a bubble floating above their head), and the visual indicator indicates the confidence level, the confidence level being a probability value or a range of probability values that the user is the human user (Cheng, Para. 0064, each avatar 310 shown in the user interface 300 can have the value for their user representational authenticity score 330 presented in a bubble floating above their head).
Soryal and Cheng are both considered to be analogous art to the claimed invention because they are in the same field of virtual environments, in particular the user experience with software-based entities in virtual world environments. Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify Soryal to compute and render a numeric confidence indicator of the presence verification (human vs. software control) proximate to the avatar, following the UI pattern taught by Cheng (a numeric score above the avatar), to include generating a proof-of-life indicator that indicates whether the user is the human user or the software-based entity and a confidence level associated with the proof-of-life indicator (Cheng, Para. 0042) and (Cheng, Para. 0064), and rendering in the metaverse environment and in a proximity of an avatar associated with the user, an icon and a visual indicator, wherein the icon indicates whether the user is the human user (Cheng, Para. 0064) and the visual indicator indicates the confidence level, the confidence level being a probability value or a range of probability values that the user is the human user (Cheng, Para. 0064). Doing so would help define how closely a user's virtual representation in the virtual world corresponds to the user's actual appearance. A user's representational authenticity can be represented by corresponding user representational authenticity data, which can be collected and validated by a third-party agency and/or by automated mechanisms (Cheng, Para. 0012).
Doing so would predictably improve the in-world user interface by providing participants with immediate, granular feedback on the avatar’s presence-verification status (cf. Soryal’s warnings/blurring) and is a straightforward application of Cheng’s display technique to Soryal’s determination, yielding no unexpected results (KSR). Representing the confidence as a probability value or range is, as evidenced by Cheng, a known, common, and obvious format for classifier confidence. See also Soryal ¶¶ 23–24 (issuing messages/warnings) and Cheng ¶ 64 (numeric display near avatar). One of ordinary skill in the art would have been motivated to improve the in-world display of Soryal’s status determination because Cheng demonstrates exactly that UI idiom (a numeric score above the avatar) in the same technological environment (virtual worlds). The combination is a predictable use of prior-art elements.
In regards to claim 2, the combination of Soryal and Cheng teaches the method of claim 1, wherein providing the proof-of-life indicator in the metaverse environment includes: configuring each of a plurality of avatars in a virtual space of the metaverse environment based on a respective proof-of-life indicator for each of the plurality of avatars (Cheng, Fig. 3, Para. 0064, the user interface 300 can be configured to present elements specific to the use of user representational authenticity data. As shown in this example, each avatar 310 shown in the user interface 300 can have the value for their user representational authenticity score 330 presented in a bubble floating above their head). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soryal to incorporate the teachings of Cheng to include wherein providing the proof-of-life indicator in the metaverse environment includes: configuring each of a plurality of avatars in a virtual space of the metaverse environment based on a respective proof-of-life indicator for each of the plurality of avatars (Cheng, Fig. 3, Para. 0064). Doing so would help define how closely a user's virtual representation in the virtual world corresponds to the user's actual appearance. A user's representational authenticity can be represented by corresponding user representational authenticity data, which can be collected and validated by a third-party agency and/or by automated mechanisms (Cheng, Para. 0012).
In regards to claim 3, the combination of Soryal and Cheng teaches the method of claim 2, wherein configuring each of the plurality of avatars in the virtual space includes at least one of: hiding, in the virtual space, at least a first avatar of the plurality of avatars that has an associated proof-of-life indicator indicating control by the software-based entity, or modifying an appearance of a respective avatar from the plurality of avatars in the virtual space that has the associated proof-of-life indicator indicating control by the software-based entity (Soryal, Para. 0034, if the identity of user 1 cannot be verified, for example by sensors at first user equipment 201, then PVS 203 will cause a noticeable change in the appearance of first virtual entity 211 within virtual reality environment 210).
In regards to claim 4, the combination of Soryal and Cheng teaches the method of claim 3, wherein configuring each of the plurality of avatars includes modifying the appearance of the respective avatar from the plurality of avatars by: determining an attribute for the respective avatar from a plurality of attributes based on the confidence level of the associated proof-of-life indicator, the plurality of attributes including different colors, shadings, or appearances for a particular range of values of the confidence level (Cheng, Para. 0030, the user representational authenticity data 145 can represent data elements that quantify certain physical characteristics of the user 105. Examples of user representational authenticity data 145 can include, but are not limited to, height, weight, eye color, date of birth, body measurements, hair color, tattoos, body-piercings, and the like). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soryal to incorporate the teachings of Cheng to include wherein configuring each of the plurality of avatars includes modifying the appearance of the respective avatar from the plurality of avatars by: determining an attribute for the respective avatar from a plurality of attributes based on the confidence level of the associated proof-of-life indicator, the plurality of attributes including different colors, shadings, or appearances for a particular range of values of the confidence level (Cheng, Para. 0030). Doing so would help define how closely a user's virtual representation in the virtual world corresponds to the user's actual appearance. A user's representational authenticity can be represented by corresponding user representational authenticity data, which can be collected and validated by a third-party agency and/or by automated mechanisms (Cheng, Para. 0012).
In regards to claim 5, the combination of Soryal and Cheng teaches the method of claim 1, wherein determining whether the user is the human user or the software-based entity includes: determining, at a client, whether the user is the human user or the software-based entity based on biometric data obtained from one or more biometric sensors of at least one user device (Soryal, Para. 0035, biometric verification data of the user by sensors at the user equipment, such as facial recognition parameters, personal kinematics, biometric user interface analysis of input signal signatures, voice print analysis, two-factor authentication, keystroke analysis, mouse movements, or the like, may be stored in the profile for later comparison and authentication of the user); determining, at the client, whether the user is the human user or the software-based entity based on profiling motion data obtained from the at least one user device during interactions of the user within the metaverse environment (Soryal, Para. 0087, the motion sensor 618 can utilize motion sensing technology such as an accelerometer, a gyroscope, or other suitable motion sensing technology to detect motion of the communication device 600 in three-dimensional space); and determining, at the client, whether the user is the human user or the software-based entity based on profiling an audio stream obtained from the at least one user device during the interactions of the user within the metaverse environment (Soryal, Para. 0027, this virtual data feed 206 from inside virtual reality environment 210 is like another invisible user placed inside the virtual space that provides audio and video from the virtual reality environment for monitoring the virtual entities inside that virtual space to PVS 203. 
Virtual camera 213 monitors the behavior of first virtual entity 211 and second virtual entity 212 to ensure that real-world actions by the operators of first user equipment 201 and second user equipment 202 are correlated with the respective virtual entity's actions).
In regards to claim 6, the combination of Soryal and Cheng teaches the method of claim 1, wherein determining whether the user is the human user or the software-based entity includes: increasing or decreasing the confidence level associated with the proof-of-life indicator based on a number and types of the plurality of sensors (Cheng, Para. 0036, the various levels of user representational authenticity data 145 collected can affect the user's 105 ability to interact within the virtual world environment 150. That is, a user 105 missing certain detailed user representational authenticity data 145 can be automatically discounted by the user-level representation-based interaction rules 175 of other users 105. For example, because User Bob 105 did not opt to have his body fat percentage measured, User Jane's 105 user-level representation-based interaction rule 175 for ignoring others whose body fat percentage is greater than 40% will automatically reject User Bob's 105 chat request). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soryal to incorporate the teachings of Cheng to include increasing or decreasing the confidence level associated with the proof-of-life indicator based on a number and types of the plurality of sensors (Cheng, Para. 0036). Doing so would help define how closely a user's virtual representation in the virtual world corresponds to the user's actual appearance.
A user's representational authenticity can be represented by corresponding user representational authenticity data, which can be collected and validated by a third-party agency and/or by automated mechanisms (Cheng, Para. 0012).
In regards to claim 7, the combination of Soryal and Cheng teaches the method of claim 1, wherein determining whether the user is the human user or the software-based entity includes: generating an aggregated user profile by combining the attribute information; and determining the confidence level based on the aggregated user profile (Cheng, Para. 0035, for example, a user 105 can elect to have data elements categorized as “Basic”, such as height and weight, whereas additional data elements, such as body fat percentage and MYERS-BRIGG personality profile, can be collected for a user 105 purchasing an “Advanced” collection package. It should be noted that the actual collection of user representational authenticity data 145 is not a focus for this embodiment of the present invention). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soryal to incorporate the teachings of Cheng to include wherein determining whether the user is the human user or the software-based entity includes: generating an aggregated user profile by combining the attribute information; and determining the confidence level based on the aggregated user profile (Cheng, Para. 0035). Doing so would help define how closely a user's virtual representation in the virtual world corresponds to the user's actual appearance. A user's representational authenticity can be represented by corresponding user representational authenticity data, which can be collected and validated by a third-party agency and/or by automated mechanisms (Cheng, Para. 0012).
In regards to claim 8, the combination of Soryal and Cheng teaches the method of claim 1, wherein providing the proof-of-life indicator includes: providing the proof-of-life indicator together with other attributes of the user to a target virtual space in the metaverse environment (Cheng, Para. 0067, only users 305 whose user representational authenticity data indicates an age greater than or equal to eighteen and whose calculated user representational authenticity score 330 is greater than or equal to forty-five are allowed to enter the coffee house 315. It then logically follows that the avatars 310 already inside the coffee house 315 meet the entry requirements 325). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soryal to incorporate the teachings of Cheng to include wherein providing the proof-of-life indicator includes: providing the proof-of-life indicator together with other attributes of the user to a target virtual space in the metaverse environment (Cheng, Para. 0067). Doing so would help define how closely a user's virtual representation in the virtual world corresponds to the user's actual appearance. A user's representational authenticity can be represented by corresponding user representational authenticity data, which can be collected and validated by a third-party agency and/or by automated mechanisms (Cheng, Para. 0012).
In regards to claim 9, the combination of Soryal and Cheng teaches the method of claim 8, further comprising: updating the confidence level associated with the proof-of-life indicator for the user based on user interactions in one or more virtual spaces in the metaverse environment (Cheng, Para. 0065, it should be noted that the presentation of a user's 305 user representational authenticity score 330 can vary based upon the implementation within the virtual world environment and/or the preferences defined by a user's 305 user-level representation-based interaction rules). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soryal to incorporate the teachings of Cheng to include updating the confidence level associated with the proof-of-life indicator for the user based on user interactions in one or more virtual spaces in the metaverse environment (Cheng, Para. 0065). Doing so would help define how closely a user's virtual representation in the virtual world corresponds to the user's actual appearance. A user's representational authenticity can be represented by corresponding user representational authenticity data, which can be collected and validated by a third-party agency and/or by automated mechanisms (Cheng, Para. 0012).
In regards to claim 10, apparatus claim 10 is analyzed and rejected in the same manner as method claim 1.
In regards to claim 11, apparatus claim 11 is analyzed and rejected in the same manner as method claim 2.
In regards to claim 12, apparatus claim 12 is analyzed and rejected in the same manner as method claim 3.
In regards to claim 13, apparatus claim 13 is analyzed and rejected in the same manner as method claim 4.
In regards to claim 14, apparatus claim 14 is analyzed and rejected in the same manner as method claim 5.
In regards to claim 15, apparatus claim 15 is analyzed and rejected in the same manner as method claim 6.
In regards to claim 16, apparatus claim 16 is analyzed and rejected in the same manner as method claim 7.
In regards to claim 17, apparatus claim 17 is analyzed and rejected in the same manner as method claim 8.
In regards to claim 18, the non-transitory computer readable storage media of claim 18 is analyzed and rejected in the same manner as method claim 1 and apparatus claim 10.
In regards to claim 19, the non-transitory computer readable storage media of claim 19 is analyzed and rejected in the same manner as method claim 2 and apparatus claim 11.
In regards to claim 20, the non-transitory computer readable storage media of claim 20 is analyzed and rejected in the same manner as method claim 3 and apparatus claim 12.
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Soryal et al. (US 2023/0388796 A1), hereinafter Soryal in view of Cheng et al. (US 2010/0262572 A1), hereinafter Cheng, and further in view of Khan (US 11,533,619 B1), hereinafter Khan.
In regards to claim 22, the combination of Soryal and Cheng does not explicitly disclose the method of claim 5, wherein determining whether the user is the human user or the software-based entity includes: generating aggregated attribute information by performing machine learning of the biometric data, profiling of the motion data, and profiling of the audio stream, wherein different weights are assigned to different sensor data streams from a plurality of biometric sensors of the at least one user device.
However, Khan teaches wherein determining whether the user is the human user or the software-based entity includes: generating aggregated attribute information by performing machine learning of the biometric data and profiling of the motion data (Khan, Col. 27, Lines 50-67, and Col. 28, Lines 1-3, the exemplary trained neural network model may be trained based on training data set(s) derived from historical and/or present activity data related to one or more activities performed within the IAPP, including one or more of: user profile data, including, without limitation, activity preferences, contextual information associated with the user profile data, activity tracking data, geographical identifier(s), Internet Service Provider's geographic location, type of activities, social impact factors, etc. In some embodiments, historical and/or present activity data may be activity data related to one or more activities performed by BOTs only (BOT activity data). In some embodiments, historical and/or present activity data may be activity data related to one or more activities performed by both BOTs and physical users (e.g., users who successfully completed the Start Random Challenge). In some embodiments, historical and/or present activity data may be activity data related to one or more activities performed by physical users only (e.g., users who successfully completed the Start Random Challenge)), and profiling of the audio stream, wherein different weights are assigned to different sensor data streams from a plurality of biometric sensors of the at least one user device (Khan, Col. 25, Lines 21-49, using a percentage confidence-based weighted average score, by measuring the inputs (e.g., “likes”, retweets, hash-tags, etc.) from users that have been authenticated as humans by the random challenge and response method and system disclosed, either numerically, as a percentage of the total postings in that category as a measuring scale, or graphically, where in some embodiments, the ranking is portrayed as an arc of a circle where a fully closed circle represents 100% and where a half moon circle represents 50%, a quarter moon represents 25%, etc. For example: News/Orange/50%, indicating a 50% level of trust in a news channel reported by a BOT, rated by an algorithm that computes the percentage of authenticated users who have tagged, followed, retweeted, hash-tagged, etc. the news channel, collectively on all news items, and discretely on individually reported events, and significantly, the algorithm excluding or distinctly indicating separately, any ratings/recommendations by BOTs, detected and tagged as such on failing to respond to the authentication challenge as disclosed herein; Marketing/Green/90%, indicating a 90% level of confidence in recommending a service or product promoted by BOTs or actual humans, as rated by authenticated actual humans (users) and significantly, by excluding or distinctly indicating separately, any recommendations by BOTs, detected and tagged as such, on failing to respond to the authentication challenge as disclosed herein). Soryal, Cheng, and Khan are all considered analogous to the claimed invention because they are in the same field of determining whether an avatar is a human user or a software-based entity in order to participate in the virtual world.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soryal and Cheng to incorporate the teachings of Khan to include wherein determining whether the user is the human user or the software-based entity includes: generating aggregated attribute information by performing machine learning of the biometric data (Khan, Col. 27, Lines 50-67, and Col. 28, Lines 1-3), profiling of the motion data, and profiling of the audio stream, wherein different weights are assigned to different sensor data streams from a plurality of biometric sensors of the at least one user device (Khan, Col. 25, Lines 21-49). Doing so would leverage the secure Mobile Originating Signaling capability on digital networks to conduct cellular-based authentication that would deliver a trusted, seamless and frictionless user identity authentication for access to digital products and/or services at scale (Khan, Col. 13, Lines 22-28).
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Soryal et al. (US 2023/0388796 A1), hereinafter Soryal in view of Cheng et al. (US 2010/0262572 A1), hereinafter Cheng, and further in view of Kim et al. (US 2023/0403270 A1), hereinafter Kim.
In regards to claim 23, the combination of Soryal and Cheng does not explicitly teach further comprising:
computing the confidence level by comparing the motion data with a plurality of learned behaviors of the human user stored in a profile of the human user and by comparing the audio data with an audio profile of the human user; and adjusting the confidence level based on obtaining an additional data stream from at least one additional type of sensor configured to monitor the activity of the user within the metaverse environment.
However, Kim teaches computing the confidence level by comparing the motion data with a plurality of learned behaviors of the human user stored in a profile of the human user and by comparing the audio data with an audio profile of the human user (Kim, Para. 0004, at step 412, the processor 138 of server 106 may have determined that the confidence threshold has been satisfied through comparison of the session data 128 and user profile 118 in step 408); and adjusting the confidence level based on obtaining an additional data stream from at least one additional type of sensor configured to monitor the activity of the user within the metaverse environment (Kim, Para. 0027, the confidence threshold may be any other suitable value greater than or less than 95%. The confidence threshold may be applied to determine whether the differences between the collective session data 128 and the user profile 118 are sufficiently minimal to authenticate the user as the first user 108) and (Kim, Para. 0059, the processor 138 of server 106 may then re-train the machine learning algorithm 126 with the received session data 128 from step 402 to update the user profile 118, wherein updating the user profile 118 improves information security by authenticating that the first user 108 is authorized to interact via the first avatar 112. Then, the method 300 proceeds to end). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soryal and Cheng to incorporate the teachings of Kim to include further comprising: computing the confidence level by comparing the motion data with a plurality of learned behaviors of the human user stored in a profile of the human user and by comparing the audio data with an audio profile of the human user (Kim, Para. 0004); and adjusting the confidence level based on obtaining an additional data stream from at least one additional type of sensor configured to monitor the activity of the user within the metaverse environment (Kim, Para. 0027). Doing so would enable comparing the user parameters of the session data to the user parameters of the stored user profile to satisfy a minimum percentage difference, and authorizing the interaction in response to comparing the session data to the stored user profile if a confidence threshold is satisfied, wherein the confidence threshold is satisfied by a minimum number of user parameters of the session data satisfying the minimum percentage difference. The processor is further configured to train the machine learning algorithm with the received session data to update the user profile, wherein updating the user profile improves information security by authenticating that the first user is authorized to interact via the first avatar (Kim, Para. 0004).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GITA FARAMARZI whose telephone number is (571)272-0248. The examiner can normally be reached Monday through Friday, 9:00 am to 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jorge L. Ortiz-Criado can be reached at (571)272-7624. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GITA FARAMARZI/Examiner, Art Unit 2496
/JORGE L ORTIZ CRIADO/Supervisory Patent Examiner, Art Unit 2496