DETAILED ACTION
This Office action is in response to Application No. 17/397,839 filed on
02/14/2026. Claims 1-20 are presented for examination and are currently pending. Applicant’s arguments have been carefully and respectfully considered.
Response to Arguments
On page 10 of the remarks, the Applicant argued that “As amended, claim 1 recites "[a] computer-implemented method of training a machine learning model" that is performed "at an electronic device including a processor, non-transitory memory, and a display." The computer-implemented method of training a machine learning model is thus directed to a process, which is one of the statutory categories of invention. As such, contrary to assertion on page 4 of the Non-Final Office Action, the computer-implemented method recited in claim 1 is not an abstract idea of a mental process”.
The Applicant’s argument is not persuasive because the 2019 PEG states that “the recitation of generic computer components in a claim does not preclude that claim from reciting an abstract idea. For instance, if a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it is still in the mental processes grouping unless the claim limitation cannot practically be performed in the mind”. Here, the processor, non-transitory memory, and display of the electronic device are all generic computer components, the recitation of which does not preclude the claim limitations from falling within the mental processes grouping.
On pages 11-12 of the remarks, the Applicant argued that “In rejecting claim 1, the Non-Final Office Action on page 5 states that the "selecting" and "generating" steps are "Mental process directed to a user choosing a focus area of the data object (i.e., asset states)" and "Mental process directed to generating data for training by the user." Further, the Non-Final Office Action on pages 6-9 states that the "training" and "displaying" steps recite "a high level of generic computer software" and "high level recitation of generic computer component". Applicants respectfully submit that the method recited in amended claim 1 provides "[a] human-intuitive user interface. to train" a machine learning model, such as "a neural network model." See, paragraph [0019] of the specification. As amended, claim 1 recites the specifics of displaying the simulations of the behaviors of the asset in the human-intuitive user interface as shown in Figures 5A-5J. Amended claim 1 also recites the specifics of generating the training data for the machine learning model based on user inputs when a user interacts with the human-intuitive user interface”.
On page 12 of the remarks, the Applicant argued that “Amended claim 1 thus recites techniques that address the issues involved in traditional model training, e.g., "creation of training data which is manually classified or weighted by a user," as identified in paragraph [0003] of the specification. As in DDR Holdings and Enfish, the present claims are rooted in a specific computer technology of providing a human-intuitive interface including simulations in an extended reality (XR) environment for generating machine learning model training data and address a specific problem arising in the machine learning model training. Additionally, providing a human-intuitive user interface by simulating the behaviors of assets in an XR environment defined by a data object, where the assets are included in the data object and have asset states generated by a machine learning model, is inherently a technological process. Also included in the technological process are selecting by the processor a training focus based on desired behaviors indicated in a user input, and generating by the processor a set of training data to train the machine learning model, so that the data object is updated accordingly and the machine learning model generates updated asset states to be reflected by the updated simulated behaviors of the asset in the XR environment. Accordingly, the recited process as a whole cannot be performed in the mind”.
The Applicant’s arguments are not persuasive because the creation of training data that is manually classified or weighted by a user, the provision of a human-intuitive interface including simulations in an extended reality (XR) environment for generating machine learning model training data, and the assets that are included in the data object and have asset states generated by a machine learning model are all additional elements, as analyzed in the § 101 rejection of this Office action. Since the claims recite an abstract idea (mental processes), for the claims to be eligible the additional elements need to demonstrate that the claim as a whole integrates the abstract idea into a practical application or amounts to significantly more than the abstract idea. See MPEP 2106.04, 2106.05.
The Applicant needs to show that the additional elements included in the claims integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. The added limitations of “the creation of training data which is manually classified or weighted by a user, providing a human-intuitive interface including simulations in an extended reality (XR) environment for generating machine learning model training data and the assets which are included in the data object that have asset states generated by a machine learning model” are all directed to additional elements. These additional elements do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. Therefore, the claims are not eligible.
In addition, the Applicant asserts that the invention addresses a specific problem without identifying what problem the invention solves.
Further on pages 12-13 of the remarks, the Applicant argued that “In summary, the training method in amended claim 1 is not an abstract idea and the specific steps recite in amended claim 1 integrate with a practical application of a human-intuitive model training method. While the remarks provided herein focus on independent claim 1, Applicants respectfully submit that the remarks are equally applicable to independent claims 17 and 20, and all of the claims dependent thereon. At least in view of the foregoing, Applicants respectfully request reconsideration and withdrawal of the rejections of claims 1-20 under 35 U.S.C. §101”.
The Applicant’s arguments are not persuasive because the Applicant needs to explain how the claims, including the additional elements identified in the § 101 rejection, improve the functioning of a computer or the technological field. Absent such a showing, the claimed invention does not integrate the abstract idea into a practical application or amount to significantly more than a judicial exception.
Furthermore, the § 101 rejection is maintained and has been adjusted to reflect the newly added limitations.
The Examiner withdraws the prior art rejections set forth in the previous Office action because the Applicant’s amendment necessitated the new grounds of rejection presented in this Office action. As a result, the Applicant’s arguments regarding the prior art are moot.
However, some of the references from the previous Office action have been applied
to the claims.
The Examiner notes that independent claims 17 and 20 are similar to claim 1.
The same rationale applies to claims 17 and 20.
The Examiner notes that dependent claims 2-16, 18, and 19, which depend directly or indirectly from claims 1 and 17, are not allowable. The Applicant’s arguments regarding these claims
have been considered but are moot for reasons similar to those set forth above regarding claim 1.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
3. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
Independent claim 1 is directed to a method, and falls into one of the four statutory categories.
Step 2A, Prong 1
Claim 1 recites the following abstract ideas:
selecting, based on the desired behaviors, a training focus indicating one or more of the plurality of asset states associated with the simulating the desired behaviors in the XR environment (Mental process directed to choosing a focus area of the data object (i.e., asset states). This can be performed by observing the focus area and making a judgement on choosing the training focus based on the states of the asset.);
wherein corresponding simulations of corresponding behaviors of the virtual object in which the training focus occurs in the XR environment are assigned weights according to the user input (Mental process directed to assigning weights to simulations that correspond to behaviors of the asset. The process can be performed by the user observing the simulations of corresponding behaviors and making a judgement on the weight assigned to the training focus);
generating a set of training data including a plurality of training instances simulating the desired behaviors of the asset and weighted according to the training focus (Mental process directed to generating data for training by the user prioritizing (i.e., training instances weighted) what areas of the data object to choose from. This process can be done by observing the simulated behaviors and making a judgement on assigning the weight to the simulated behaviors); and
Step 2A, Prong 2
Claim 1 recites the following additional elements:
at an electronic device including a processor and non-transitory memory (This limitation is directed to a computer component. This is a high-level recitation of a generic computer component and does not integrate the abstract idea into a practical application. See MPEP 2106.05(f));
by the processor (This limitation is directed to a computer component. This is a high-level recitation of a generic computer component and does not integrate the abstract idea into a practical application. See MPEP 2106.05(f));
and a display (This limitation is directed to a generic computer component. This is a high-level recitation of a generic computer output component and does not integrate the abstract idea into a practical application. See MPEP 2106.05(f));
displaying, on the display, an extended reality (XR) environment including simulations of behaviors of an asset wherein the XR environment is defined by a data object provided to the machine learning model and (This limitation is directed to a computer output component that displays a simulation of events (i.e., plurality of asset states) on the screen. This is a high-level recitation of a generic computer component and does not integrate the abstract idea into a practical application. See MPEP 2106.05(f));
and wherein the asset is included in the data object and has a plurality of asset states generated by the machine learning model indicating the simulations of the behaviors in the XR environment (This limitation is directed to a particular type or source of data, which is a field-of-use limitation and does not integrate the abstract idea into a practical application. See MPEP 2106.05(h));
while displaying the simulations of the behaviors in the XR environment, receiving a user input (This limitation is directed to insignificant extra-solution activity of data transmission. This limitation does not integrate the abstract idea into a practical application. See MPEP 2106.05(g))
indicative of training the machine learning model and desired behaviors of the asset (This limitation is directed to training of a model using input data. This is recited at a high level as generic computer software using data to train a model. This does not integrate the abstract idea into a practical application. See MPEP 2106.05(f)),
training the machine learning model on the set of training data by providing the data object and the weighted plurality of training instances to the machine learning model (This limitation is directed to training of a machine learning model using training data (i.e., the weighted plurality of training instances). This is recited at a high level as generic computer software using data to train a model. This does not integrate the abstract idea into a practical application. See MPEP 2106.05(f));
displaying, on the display, updated simulated behaviors of the asset in the XR environment reflecting a next plurality of asset states (This limitation is directed to insignificant extra-solution activity of data transmission. This limitation does not integrate the abstract idea into a practical application. See MPEP 2106.05(g)),
wherein the next plurality of asset states is generated by the trained machine learning model (This limitation is directed to insignificant extra-solution activity of outputting data. This limitation does not integrate the abstract idea into a practical application. See MPEP 2106.05(g)).
Step 2B
Claim 1 recites the following additional elements:
at an electronic device including a processor and non-transitory memory (This limitation is directed to a computer component. This is a high-level recitation of a generic computer component and does not amount to significantly more than the judicial exception. See MPEP 2106.05(f));
by the processor (This limitation is directed to a computer component. This is a high-level recitation of a generic computer component and does not amount to significantly more than the judicial exception. See MPEP 2106.05(f));
and a display (This limitation is directed to a generic computer component. This is a high-level recitation of a generic computer output component and does not amount to significantly more than the judicial exception. See MPEP 2106.05(f));
displaying, on the display, an extended reality (XR) environment including simulations of behaviors of an asset wherein the XR environment is defined by a data object provided to the machine learning model and (This limitation is directed to a computer output component that displays a picture of events (i.e., plurality of asset states) on the screen. This is a high-level recitation of a generic computer component and does not amount to significantly more than the judicial exception. See MPEP 2106.05(f));
and wherein the asset is included in the data object and has a plurality of asset states generated by the machine learning model indicating the simulations of the behaviors in the XR environment (This limitation is directed to a particular type or source of data, which is a field-of-use limitation and does not amount to significantly more than the judicial exception. See MPEP 2106.05(h));
while displaying the simulations of the behaviors in the XR environment, receiving a user input (This limitation is directed to insignificant extra-solution activity of data transmission, which is well-understood, routine, and conventional. This does not amount to significantly more than the judicial exception. See MPEP 2106.05(d)(II), example i)
indicative of training the machine learning model and desired behaviors of the asset (This limitation is directed to training of a model using input data. This is recited at a high level as generic computer software using data to train a model. This does not amount to significantly more than the judicial exception. See MPEP 2106.05(f)),
training the machine learning model on the set of training data by providing the data object and the weighted plurality of training instances to the machine learning model (This limitation is directed to training of a machine learning model using training data (i.e., the weighted plurality of training instances). This is recited at a high level as generic computer software using data to train a model. This does not amount to significantly more than the judicial exception. See MPEP 2106.05(f));
displaying, on the display, updated simulated behaviors of the asset in the XR environment reflecting a next plurality of asset states (This limitation is directed to insignificant extra-solution activity of data transmission, which is well-understood, routine, and conventional. This does not amount to significantly more than the judicial exception. See MPEP 2106.05(d)(II), example i),
wherein the next plurality of asset states is generated by the trained machine learning model (This limitation is directed to insignificant extra-solution activity of outputting data, which is well-understood, routine, and conventional. This does not amount to significantly more than the judicial exception. See MPEP 2106.05(d)(II), example i).
4. Dependent claim 2 is directed to a method, and falls into one of the four statutory categories.
Claim 2 does not recite any abstract ideas.
Claim 2 recites the following additional elements:
wherein the user input includes speech (This limitation is directed to a particular type or source of data, which is a field-of-use limitation and does not integrate the abstract idea into a practical application. See MPEP 2106.05(h)).
Claim 2 recites the following additional elements:
wherein the user input includes speech (This limitation is directed to a particular type or source of data, which is a field-of-use limitation and does not amount to significantly more than the judicial exception. See MPEP 2106.05(h)).
5. Dependent claim 3 is directed to a method, and falls into one of the four statutory categories.
Claim 3 recites the following abstract ideas:
converting the speech to a text representation of the speech (Mental process directed to converting speech to text. This can be done with the use of pen and paper);
parsing the text representation of the speech with a natural language parsing algorithm to identify one or more of the plurality of asset states (Mental process directed to using an algorithm to identify the plurality of asset states.); and
selecting the training focus based on the identified one or more of the plurality of asset states (Mental process directed to selecting based on the identified asset states. This selection can be done with a pen and paper).
Claim 3 does not recite any additional elements.
6. Dependent claim 4 is directed to a method, and falls into one of the four statutory categories.
Claim 4 does not recite any abstract ideas.
Claim 4 recites the following additional elements:
wherein the user input indicates a video (This limitation is directed to a particular type or source of data, which is a field-of-use limitation and does not integrate the abstract idea into a practical application. See MPEP 2106.05(h)).
Claim 4 recites the following additional elements:
wherein the user input indicates a video (This limitation is directed to a particular type or source of data, which is a field-of-use limitation and does not amount to significantly more than the judicial exception. See MPEP 2106.05(h)).
7. Dependent claim 5 is directed to a method, and falls into one of the four statutory categories.
Claim 5 recites the following abstract ideas:
performing video analysis on the video to identify the one or more of the plurality of asset states (Mental process directed to analyzing a video to identify asset states. This is a process that is performed in the mind); and
selecting the training focus based on the identified one or more of the plurality of asset states (Mental process directed to selecting the training focus based on the identified asset states. This process can be done with pen and paper.).
Claim 5 does not recite any additional elements.
8. Dependent claim 6 is directed to a method, and falls into one of the four statutory categories.
Claim 6 recites the following abstract ideas:
determining a plurality of candidate training focuses, each indicating a different set of one or more of the plurality of asset states (Mental process directed to determining the training focus that indicates plurality of assets. This process can be performed in the mind); and
selecting one of the plurality of candidate training focuses as the training focus (Mental process directed to selecting training focuses. This process can be performed in the mind).
Claim 6 does not recite any additional elements.
9. Dependent claim 7 is directed to a method, and falls into one of the four statutory categories.
Claim 7 does not recite any abstract ideas.
Claim 7 recites the following additional elements:
wherein at least one of the plurality of candidate training focuses indicates a single one of the plurality of asset states (This limitation is directed to a particular type or source of data, which is a field-of-use limitation and does not integrate the abstract idea into a practical application. See MPEP 2106.05(h)).
Claim 7 recites the following additional elements:
wherein at least one of the plurality of candidate training focuses indicates a single one of the plurality of asset states (This limitation is directed to a particular type or source of data, which is a field-of-use limitation and does not amount to significantly more than the judicial exception. See MPEP 2106.05(h)).
10. Dependent claim 8 is directed to a method, and falls into one of the four statutory categories.
Claim 8 does not recite any abstract ideas.
Claim 8 recites the following additional elements:
wherein at least one of the plurality of candidate training focuses indicates a function of two or more of the plurality of asset states (This limitation is directed to a particular type or source of data, which is a field-of-use limitation and does not integrate the abstract idea into a practical application. See MPEP 2106.05(h)).
Claim 8 recites the following additional elements:
wherein at least one of the plurality of candidate training focuses indicates a function of two or more of the plurality of asset states (This limitation is directed to a particular type or source of data, which is a field-of-use limitation and does not amount to significantly more than the judicial exception. See MPEP 2106.05(h)).
11. Dependent claim 9 is directed to a method, and falls into one of the four statutory categories.
Claim 9 recites the following abstract ideas:
ranking the plurality of candidate training focuses (Mental process directed to ranking the training focuses. This process can be performed with pen and paper); and
selecting one of the candidate training focuses as the training focus based on the ranking (Mental process directed to choosing the training focuses based on the ranking. This process can be performed with pen and paper).
Claim 9 does not recite any additional elements.
12. Dependent claim 10 is directed to a method, and falls into one of the four statutory categories.
Claim 10 recites the following abstract ideas:
wherein ranking the plurality of candidate training focuses is based on asset state recency (Mental process directed to ranking training focuses based on the recent state of the asset. This process can be performed with pen and paper).
Claim 10 does not recite any additional elements.
13. Dependent claim 11 is directed to a method, and falls into one of the four statutory categories.
Claim 11 recites the following abstract ideas:
wherein ranking the plurality of candidate training focuses is based on the user input (Mental process directed to ranking training focuses based on the input of the user. This process can be performed in the mind).
Claim 11 does not recite any additional elements.
14. Dependent claim 12 is directed to a method, and falls into one of the four statutory categories.
Claim 12 recites the following abstract ideas:
selecting a potential training focus indicating the one or more of the plurality of asset states (Mental process directed to selecting a potential focus that indicates asset states. This can be done with a pen and paper); and
Claim 12 recites the following additional elements:
presenting a natural language confirmation of the potential training focus (This limitation is directed to outputting a confirmation of the potential training focus. This is mere data gathering to apply an exception. This does not integrate the abstract idea into a practical application. See MPEP 2106.05(g)).
Claim 12 recites the following additional elements:
presenting a natural language confirmation of the potential training focus (This limitation is directed to outputting a confirmation of the potential training focus. This is mere data gathering to apply an exception. This does not amount to significantly more than the judicial exception. See MPEP 2106.05(g)).
15. Dependent claim 13 is directed to a method, and falls into one of the four statutory categories.
Claim 13 recites the following abstract ideas:
selecting the potential training focus as the training focus (Mental process directed to a user choosing a potential training focus. The choosing process can be performed in the mind).
Claim 13 recites the following additional elements:
receiving a user input confirming the potential training focus (This limitation is directed to mere data gathering. This does not integrate the abstract idea into a practical application. See MPEP 2106.05(g)) and
Claim 13 recites the following additional elements:
receiving a user input confirming the potential training focus (This limitation is directed to mere data gathering. This does not amount to significantly more than the judicial exception. See MPEP 2106.05(g)).
16. Dependent claim 14 is directed to a method, and falls into one of the four statutory categories.
Claim 14 recites the following abstract ideas:
selecting the modified potential training focus as the training focus (Mental process directed to a user choosing a modified potential training focus. The choosing process can be performed in the mind).
Claim 14 recites the following additional elements:
receiving a user input modifying the potential training focus (This limitation is directed to mere data gathering. This does not integrate the abstract idea into a practical application. See MPEP 2106.05(g)) and
Claim 14 recites the following additional elements:
receiving a user input modifying the potential training focus (This limitation is directed to mere data gathering. This does not amount to significantly more than the judicial exception. See MPEP 2106.05(g)).
17. Dependent claim 15 is directed to a method, and falls into one of the four statutory categories.
Claim 15 recites the following abstract ideas:
selecting a different potential training focus as the training focus (Mental process directed to a user choosing a potential training focus. The choosing process can be performed in the mind).
Claim 15 recites the following additional elements:
receiving a user input negating the potential training focus (This limitation is directed to insignificant extra solution activity of data transmission. This limitation does not integrate the abstract idea into a practical application. See MPEP 2106.05(g)) and
Claim 15 recites the following additional elements:
receiving a user input negating the potential training focus (This limitation is directed to insignificant extra-solution activity of data transmission, which is well-understood, routine, and conventional. This does not amount to significantly more than the judicial exception. See MPEP 2106.05(d)(II), example i).
18. Dependent claim 16 is directed to a method, and falls into one of the four statutory categories.
Claim 16 does not recite any abstract ideas.
Claim 16 recites the following additional elements:
wherein the machine learning model includes a neural network model (This limitation is directed to the use of a neural network model. This is recited at a high level as generic computer software. This does not integrate the abstract idea into a practical application. See MPEP 2106.05(f)).
Claim 16 recites the following additional elements:
wherein the machine learning model includes a neural network model (This limitation is directed to the use of a neural network model. This is recited at a high level as generic computer software. This does not amount to significantly more than the judicial exception. See MPEP 2106.05(f)).
19. Independent claim 17 is directed to a device, and falls into one of the four statutory categories.
Claim 17 is substantially similar to claim 1 and is rejected in
the same manner and for the same reasons.
20. Dependent claim 18 is directed to a device, and falls into one of the four statutory categories.
Claim 18 is substantially similar to claim 3 and is rejected in
the same manner and for the same reasons. Further,
Claim 18 recites the following additional elements:
the one or more processors (This limitation is directed to a computer component. This is a high-level recitation of a generic computer component and does not integrate the abstract idea into a practical application. See MPEP 2106.05(f)).
Claim 18 recites the following additional elements:
the one or more processors (This limitation is directed to a computer component. This is a high-level recitation of a generic computer component and does not amount to significantly more than the judicial exception. See MPEP 2106.05(f)).
21. Dependent claim 19 is directed to a device, and falls into one of the four statutory categories.
Claim 19 is substantially similar to claim 6 and is rejected in
the same manner and for the same reasons. Further,
Claim 19 recites the following additional elements:
the one or more processors (This limitation is directed to a computer component. This is a high-level recitation of a generic computer component and does not integrate the abstract idea into a practical application. See MPEP 2106.05(f)).
Claim 19 recites the following additional elements:
the one or more processors (This limitation is directed to a computer component. This is a high-level recitation of a generic computer component and does not amount to significantly more than the judicial exception. See MPEP 2106.05(f)).
22. Independent claim 20 is directed to a device, and falls into one of the four statutory categories.
Claim 20 is substantially similar to claim 1 and is rejected in
the same manner and for the same reasons. Further,
Claim 20 recites the following additional elements:
a non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with a display, cause the device to (This limitation is directed to mere instructions to apply a judicial exception. This is a high-level recitation of generic computer components and does not integrate the abstract idea into a practical application. See MPEP 2106.05(f)):
Claim 20 recites the following additional elements:
the one or more processors (This limitation is directed to mere instructions to apply a judicial exception. This is a high-level recitation of a generic computer component and does not amount to significantly more than the judicial exception. See MPEP 2106.05(f)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
23. Claims 1, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US 2018/0232921) in view of Sowden et al. (US 2020/0241716, filed 01/25/2019).
Regarding claim 1, Smith teaches a computer-implemented method (The computing device 102 is illustrated as including the experience interaction module 116 that is implemented at least partially in hardware of the computing device 102 [0036]) of training a machine learning model (The profile generation module 402, for example, may employ machine learning techniques such as neural networks (e.g., convolutional, deep learning, regression) to learn a model to describe how interaction occurs with virtual objects within a virtual or augmented reality environment [0050]; The Examiner notes “to learn” indicates training is taking place because learning is the intended goal of training, and learning includes the step of training) comprising:
at an electronic device including a processor, non-transitory memory, and a display (The computing device 102 includes a housing 204, one or more sensors 206, and an output device 208, e.g., display device [0037]; A computing device, for instance, may be configured as … a mobile device (e.g., assuming a handheld configuration such as a tablet or mobile phone), worn by a user as goggles or other eyewear, and so forth. Thus, a computing device may range from full resource devices with substantial memory and processor resources (e.g., personal computers, game consoles) to a low-resource device with limited memory and/or processing resources [0029]):
displaying, on the display, an extended reality (XR) environment (computing device 102 of FIG. 1 in greater detail. The illustrated environment 100 includes the computing device 102 of FIG. 1 as configured for use in augmented reality and/or virtual reality scenarios [0035]; The output device 208 is also configurable in a variety of ways to support a virtual or augmented reality environment through visual, audio, and even tactile outputs [0039]) including simulations of behaviors of an asset (Additionally, other virtual objects 112 configured as virtual human entities may also “go along” with the tour may ask questions [0064]; In a further example, different levels of output supported by the virtual objects is modeled (block 508) by a level of output modeling model 410 [0053]. The Examiner notes that modeling is simulation, virtual human entity is an asset, and asking questions is a behavior of an asset),
wherein the XR environment is defined by a data object (provision of digital experience content 110 and associated virtual objects 112 [0030]; “Digital experience content” is used by a computing device to define an immersive environment as part of a virtual or augmented reality environment [0023]) provided to the machine learning model (process the digital experience content 110 using machine learning [0056], Fig. 4), and
wherein the asset is included in the data object and has a plurality of asset states (In another example, the behavior of virtual human entities may also be changed, e.g., from rowdy screaming fans jumping up and down to a more subdued experience [0063]. The Examiner notes a virtual human entity is an asset, motion of jumping up is an asset state and jumping down is another asset state) generated by the machine learning model indicating the simulations of the behaviors in the XR environment (The experience generation module 414 then employs the user profile 120 to process the digital experience content 110 using machine learning to select and configure virtual objects for inclusion as part of the digital experience content 110 [0056]); while displaying the simulations of the behaviors in the XR environment (The user 210, for instance, may be exposed to virtual objects 112 that are not “really there” (e.g., virtual bricks) and are displayed for viewing by the user in an environment that also is completely computer generated [0041]),
receiving a user input indicative of training the machine learning model (The types of user interaction how the user 210 may provide inputs and interact with the virtual objects 108 [0051]) and desired behaviors of the asset (Example of types of user interaction include manual manipulation (e.g., virtual handling of the virtual objects 108, typing) [0051]);
selecting by the processor (Thus, a computing device may range from full resource devices with substantial memory and processor resources [0029]), based on the desired behaviors (One user, for instance, may “walk” in and enjoy the choir singing, while another may desire a completely empty cathedral to browse through with a virtual brochure “in their hand” at their own pace [0062]),
a training focus indicating one or more of the plurality of asset states associated with simulating the desired behaviors in the XR environment (A virtual tourism application executed by a computing device 102, for example, through use of the techniques described herein may learn preferences of these users regarding “how” the different users choose the interact with the environment. One user, for instance, may “walk” in and enjoy the choir singing [0062]. The Examiner notes the preferences of these users are a selected training focus and the motion of “walk” is an asset state),
generating by the processor, a set of training data including a plurality of training instances (The user profile, for instance, may be generated through machine learning by a computing device to describe user interaction with digital experience content, i.e., content used to define an augmented or virtual reality environment [0021]) simulating the desired behaviors of the asset (In an audio example virtual objects may be output as an audio notification (e.g., via a virtual loudspeaker system), as part of an “overheard” conversation by virtual human entities within the environment, and so forth. Thus, modeling of the different types of output may give insight into desires in how the user desires to receive information within the environment [0054]) and displaying, on the display (The output device 208 is also configurable in a variety of ways to support a virtual or augmented reality environment through visual, audio, and even tactile outputs [0039]),
updated simulated behaviors of the asset in the XR environment reflecting a next plurality of asset states (If a user selects to go to a cathedral within a virtual tourism application, for instance, the next virtual recommendation may be other cathedrals or castles or buildings from a similar timeframe [0073]),
wherein the next plurality of asset states is generated by the trained machine learning model (the experience recommendation module 124 updates the user profile 120 so that recommendations 708 are generated with increased accuracy [0073]; the experience generation module 710 includes an experience recommendation module 714 that is configured to generate the recommendation 708 based on the user profile 120 (i.e., the machine-learned model of user interaction) [0071]).
Smith does not explicitly teach wherein corresponding simulations of corresponding behaviors of the virtual object in which the training focus occurs in the XR environment are assigned weights according to the user input; weighted according to the training focus; and training, by the processor, the machine learning model on the set of training data by providing the data object and the weighted plurality of training instances to the machine learning model.
Sowden teaches wherein corresponding simulations of corresponding behaviors of the virtual object (In some implementations, a “user” can include one or more programs or virtual entities, as well as persons that interface with the system or network [0042]) in which the training focus occurs in the XR environment are assigned weights according to the user input (In some implementations, the level of movement of the subject may be assigned different weights based on the type of subject [0099]; In other words, a high motion score is indicative for a variation across frames of a motion image that is perceived by the user as large or significant such the motion aspect of the image increases and enhances, when displayed, user experience, perception and information obtained by the user by the motion aspect [0100]);
generating by the processor, a set of training data including a plurality of training instances simulating the behaviors of the asset and weighted according to the training focus (Prior to the training, … A training dataset of motion images may be obtained [0114]; For example, the motion score may be a weighted combination of the level of stability and the level of movement of the subject in the motion [0099]); and
training by the processor, the machine learning model on the set of training data by providing the data object and the weighted plurality of training instances to the machine learning model (A training dataset of motion images may be obtained and provided as input to the neural network. For example, the training dataset may include a plurality of motion images and associated motion scores or labels [0114]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Sowden for the benefit of an improved display of motion images with reduced computational load or reduced storage cost (Sowden [0122]).
Regarding claim 17, claim 17 is similar to claim 1. It is rejected in the same manner, with the same reasoning applying. Further, Smith teaches a device comprising: a non-transitory memory, a display, and one or more processors to (a computing device may range from full resource devices with substantial memory and processor resources (e.g., personal computers, game consoles) [0029]; The computing device 102 includes a housing 204, one or more sensors 206, and an output device 208, e.g., display device [0037]).
Regarding claim 20, claim 20 is similar to claim 1. It is rejected in the same manner, with the same reasoning applying. Further, Smith teaches a non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with a display, cause the device to (The computer-readable media may include a variety of media that may be accessed by the computing device 902 [0083]; “Computer-readable storage media” may refer to media and/or devices that enable persistent and/or non-transitory storage of information in contrast to mere signal transmission, carrier waves, or signals per se. Thus, computer-readable storage media refers to non-signal bearing media. The computer-readable storage media includes hardware such as volatile and non-volatile, removable and non-removable media and/or storage devices implemented in a method [0084]).
24. Claims 2, 4, 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US20180232921) in view of Sowden et al. (US20200241716, filed 01/25/2019), and further in view of Hwang et al. (US20200005539, filed 06/27/2018).
Regarding claim 2, Modified Smith teaches the method of claim 1. Hwang teaches wherein the user input includes speech (The speech recognition module 370 of the NED system 300 may transcribe the speech … a tone or pitch of the user's speech [0093]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Hwang for the benefit of a NED (near-eye display) system that is able to present different types of AR (augmented reality) content that may serve to enhance or emphasize the identified gestures (Hwang [0015]).
Regarding claim 4, Modified Smith teaches the method of claim 1. Hwang teaches wherein the user input indicates a video (In some embodiments, the NED 305 may augment views of a physical, real-world environment with computer-generated elements (e.g., … video) [0027]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Hwang for the benefit of a NED (near-eye display) system that is able to present different types of AR (augmented reality) content that may serve to enhance or emphasize the identified gestures (Hwang [0015]).
Regarding claim 5, Modified Smith teaches the method of claim 4. Hwang teaches wherein selecting the training focus includes: performing video analysis on the video to identify the one or more of the plurality of asset states (In one embodiment, more than one imaging device 315 is used to capture images of the user's hands. As described in further detail below, the captured images of the user's hands may be used to identify various gestures for the user [0038]); and
selecting the training focus based on the identified one or more of the plurality of asset states (Each virtual flair corresponding to a particular gesture may be selected in order to emphasize the gesture, provide additional context to the gesture, and/or otherwise add visual excitement to the AR environment [0097]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Hwang for the benefit of a NED (near-eye display) system that is able to present different types of AR (augmented reality) content that may serve to enhance or emphasize the identified gestures (Hwang [0015]).
Regarding claim 16, Modified Smith teaches the method of claim 1. Hwang teaches wherein the machine learning model includes a neural network model (the tracking module 360 includes a machine learning model (e.g., a convolutional neural network) [0057]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Hwang for the benefit of a NED (near-eye display) system that is able to present different types of AR (augmented reality) content that may serve to enhance or emphasize the identified gestures (Hwang [0015]).
25. Claims 3, 6-13, 15, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US20180232921) in view of Hwang et al. (US20200005539, filed 06/27/2018), in view of Sowden et al. (US20200241716, filed 01/25/2019), and further in view of Mixter et al. (US20200342223, PCT filed 05/04/2018).
Regarding claim 3, Modified Smith teaches the method of claim 2. Hwang teaches wherein selecting the training focus includes: converting the speech to a text representation of the speech (the NED system 300 may display a transcription of one or more words spoken by the individual … such that the transcription text is displayed (e.g., one letter or word at a time) [0093]);
parsing the text representation of the speech (The speech recognition module 370 uses one or more audio transcription algorithms to parse to received audio data and transcribe a transcription of the detected speech [0058])
to identify the one or more of the plurality of asset states (In some embodiments, a gesture comprises a sequence of multiple motions of the user's hand 440. In some embodiments, a gesture also corresponds to a particular position or orientation of the user's hand [0076]; the controller 310 is configured to be able to identify different types of gestures [0076]); and
selecting the training focus based on the identified one or more of the plurality of asset states (Each virtual flair corresponding to a particular gesture may be selected in order to emphasize the gesture, provide additional context to the gesture, and/or otherwise add visual excitement to the AR environment [0097]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Hwang for the benefit of a NED (near-eye display) system that is able to present different types of AR (augmented reality) content that may serve to enhance or emphasize the identified gestures (Hwang [0015]).
Modified Smith does not explicitly teach parsing the text representation of the speech with a natural language parsing algorithm.
Mixter teaches parsing the text representation of the speech with a natural language parsing algorithm to identify the one or more of the plurality of asset states (speech capture module 112 may be further configured to convert that captured audio to text and/or to other representations or embeddings, e.g., using speech-to-text (“STT”) processing techniques [0052]; Also, for example, in some implementations the natural language processor 133 may additionally and/or alternatively include a dependency parser (not depicted) configured to determine syntactic relationships between terms in natural language input [0058]); and
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Mixter for the benefit of selectively analyzing vision frames, e.g., by a gaze and mouth module 116 of adaptation engine 115, to monitor for occurrence of mouth movement of a user (Mixter [0042]).
Regarding claim 6, Modified Smith teaches the method of claim 1. Modified Smith does not explicitly teach wherein selecting the training focus includes: determining a plurality of candidate training focuses, each indicating a different set of one or more of the plurality of asset states; and selecting one of the plurality of candidate training focuses as the training focus.
Mixter teaches wherein selecting the training focus includes: determining a plurality of candidate training focuses, each indicating a different set of one or more of the plurality of asset states (a gesture (e.g., “hand wave”, “thumbs up”, “high five”) of the user that co-occurs with, or is in temporal proximity to, the detected mouth movement [0005]; that a gesture (e.g., any of one or more candidate invocation gestures) of the user co-occurred with the mouth movement [0048]; the mouth module 116A determines mouth movement only when mouth movement is detected with at least a threshold probability and/or for at least a threshold duration [0085]) and
selecting one of the plurality of candidate training focuses as the training focus (As yet another example, transmission of the data can additionally or alternatively be further based on determining, by the other conditions module 118 based on vision data, that a gesture (e.g., any of one or more candidate invocation gestures) of the user co-occurred with the mouth movement and/or directed gaze of the user [0048]. The Examiner notes the transmission of any of one or more candidate invocation gestures are selected).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Mixter for the benefit of selectively analyzing vision frames, e.g., by a gaze and mouth module 116 of adaptation engine 115, to monitor for occurrence of mouth movement of a user (Mixter [0042]).
Regarding claim 7, Modified Smith teaches the method of claim 6. Modified Smith does not explicitly teach wherein at least one of the plurality of candidate training focuses indicates a single one of the plurality of asset states.
Mixter teaches wherein at least one of the plurality of candidate training focuses indicates a single one of the plurality of asset states (a gesture (e.g., “hand wave”, “thumbs up”, “high five”) of the user that co-occurs with, or is in temporal proximity to, the detected mouth movement [0005]; that a gesture (e.g., any of one or more candidate invocation gestures) of the user co-occurred with the mouth movement [0048]).
The same motivation to combine as applied to dependent claim 6 applies here.
Regarding claim 8, Modified Smith teaches the method of claim 6. Modified Smith does not explicitly teach wherein at least one of the plurality of candidate training focuses indicates a function of two or more of the plurality of asset states.
Mixter teaches wherein at least one of the plurality of candidate training focuses indicates a function of two or more of the plurality of asset states (that a gesture (e.g., any of one or more candidate invocation gestures) of the user co-occurred with the mouth movement and/or directed gaze of the user [0048]. The Examiner notes gesture as a movement and mouth movement indicates a function of two of the plurality of asset states).
The same motivation to combine as applied to dependent claim 6 applies here.
Regarding claim 9, Modified Smith teaches the method of claim 6. Modified Smith does not explicitly teach wherein selecting one of the plurality of candidate training focuses as the training focus includes: ranking the plurality of candidate training focuses; and selecting one of the candidate training focuses as the training focus based on the ranking.
Mixter teaches wherein selecting one of the plurality of candidate training focuses as the training focus includes: ranking the plurality of candidate training focuses (If the image frames are processed to generate probabilities of [0.75, 0.85, 0.5, 0.7, 0.9], mouth movement can be detected since 80% of the frames indicated mouth movement with a probability that is greater than 0.7 [0085]); and
selecting one of the candidate training focuses as the training focus based on the ranking (The mouth movement module can determine there is mouth movement only if at least X % of a sequence of image frames (that corresponds to the threshold duration) has a corresponding probability that satisfies a threshold. For instance, assume X % is 60%, the probability threshold is 0.6, and the threshold duration is 0.25 seconds [0085]. The Examiner notes the selection is made once the threshold is satisfied).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Mixter for the benefit of selectively analyzing vision frames, e.g., by a gaze and mouth module 116 of adaptation engine 115, to monitor for occurrence of mouth movement of a user (Mixter [0042]).
Regarding claim 10, Modified Smith teaches the method of claim 9. Modified Smith does not explicitly teach wherein ranking the plurality of candidate training focuses is based on asset state recency.
Mixter teaches wherein ranking the plurality of candidate training focuses is based on asset state recency (However, in response to detection of mouth movement and the directed gaze, such processing can be adapted by causing transmission of audio data and/or vision data (e.g., recently buffered data and/or data received after the detection) to the cloud-based automated assistant component(s) 130 for further processing [0044]).
The same motivation to combine as applied to dependent claim 3 applies here.
Regarding claim 11, Modified Smith teaches the method of claim 9. Modified Smith does not explicitly teach wherein ranking the plurality of candidate training focuses is based on the user input.
Mixter teaches wherein ranking the plurality of candidate training focuses is based on the user input (implement the automated assistant (e.g., remote server device(s) that process user inputs and generate appropriate responses) [0012]; As used herein, free-form input is input that is formulated by a user and that is not constrained to a group of options presented for selection by the user [0056]; If the image frames are processed to generate probabilities of [0.75, 0.85, 0.5, 0.7, 0.9], mouth movement can be detected since 80% of the frames indicated mouth movement with a probability that is greater than 0.7 [0085]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Mixter for the benefit of selectively analyzing vision frames, e.g., by a gaze and mouth module 116 of adaptation engine 115, to monitor for occurrence of mouth movement of a user (Mixter [0042]).
Regarding claim 12, Modified Smith teaches the method of claim 1. Hwang teaches wherein selecting the training focus includes: selecting a potential training focus indicating the one or more of the plurality of asset states (the gesture tracking system of the NED system may identify that an individual in the local area has performed a specific type of gesture such as slapping a surface of a table [0015]); and
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Hwang for the benefit of a NED (near-eye display) system that is able to present different types of AR (augmented reality) content that may serve to enhance or emphasize the identified gestures (Hwang [0015]).
Modified Smith does not explicitly teach presenting a natural language confirmation of the potential training focus.
Mixter teaches presenting a natural language confirmation of the potential training focus (In some implementations, natural language processor 133 and intent matcher 134 may collectively form the aforementioned intent understanding module 135 [0062]; For example, one grammar, “play <artist>”, may be mapped to an intent that invokes a responsive action that causes music by the <artist> to be played on the client device 106 operated by the user. Another grammar, “[weather I forecast] today,” may be match-able to user queries such as “what's the weather today” and “what's the forecast for today?” [0063]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Mixter for the benefit of selectively analyzing vision frames, e.g., by a gaze and mouth module 116 of adaptation engine 115, to monitor for occurrence of mouth movement of a user (Mixter [0042]).
Regarding claim 13, Modified Smith teaches the method of claim 12. Hwang teaches wherein selecting the training focus further includes (Each virtual flair corresponding to a particular gesture may be selected in order to emphasize the gesture, provide additional context to the gesture, and/or otherwise add visual excitement to the AR environment [0097]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Hwang for the benefit of a NED (near-eye display) system that is able to present different types of AR (augmented reality) content that may serve to enhance or emphasize the identified gestures (Hwang [0015]).
Modified Smith does not explicitly teach receiving a user input confirming the potential training focus and selecting the potential training focus as the training focus.
Mixter teaches receiving a user input confirming the potential training focus and selecting the potential training focus as the training focus (In some implementations, STT module 132 may generate a plurality of candidate textual interpretations of the user's utterance, and utilize one or more techniques to select a given interpretation from the candidates [0054]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Mixter for the benefit of selectively analyzing vision frames, e.g., by a gaze and mouth module 116 of adaptation engine 115, to monitor for occurrence of mouth movement of a user (Mixter [0042]).
Regarding claim 15, Modified Smith teaches the method of claim 12. Hwang teaches wherein selecting the training focus further includes (Each virtual flair corresponding to a particular gesture may be selected in order to emphasize the gesture, provide additional context to the gesture, and/or otherwise add visual excitement to the AR environment [0097]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Hwang for the benefit of a NED (near-eye display) system that is able to present different types of AR (augmented reality) content that may serve to enhance or emphasize the identified gestures (Hwang [0015]).
Mixter teaches receiving a user input negating the potential training focus (For example, a television captured in video(s)/image(s) can be ignored to prevent false detections as a result of a person rendered by the television (e.g., a weatherperson) [0017]) and
selecting a different potential training focus as the training focus (Also, in various implementations, once a TV, picture frame, etc. location is detected, it can optionally continue to be ignored over multiple frames (e.g., while verifying intermittently, until movement of client device or object(s) is detected, etc.) [0017]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Mixter for the benefit of selectively analyzing vision frames, e.g., by a gaze and mouth module 116 of adaptation engine 115, to monitor for occurrence of mouth movement of a user (Mixter [0042]).
Regarding claim 18, Modified Smith teaches the device of claim 17. Hwang teaches wherein the user input includes speech (The speech recognition module 370 of the NED system 300 may transcribe the speech … a tone or pitch of the user's speech [0093]) and
the one or more processors are to select the training focus by: converting the speech to a text representation of the speech (the NED system 300 may display a transcription of one or more words spoken by the individual … such that the transcription text is displayed (e.g., one letter or word at a time) [0093]);
parsing the text representation of the speech (The speech recognition module 370 uses one or more audio transcription algorithms to parse to received audio data and transcribe a transcription of the detected speech [0058])
to identify one or more of the plurality of asset states (In some embodiments, a gesture comprises a sequence of multiple motions of the user's hand 440. In some embodiments, a gesture also corresponds to a particular position or orientation of the user's hand [0076]; the controller 310 is configured to be able to identify different types of gestures [0076]); and
selecting the training focus based on the identified one or more of the plurality of asset states (Each virtual flair corresponding to a particular gesture may be selected in order to emphasize the gesture, provide additional context to the gesture, and/or otherwise add visual excitement to the AR environment [0097]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Hwang for the benefit of a NED (near-eye display) system that is able to present different types of AR (augmented reality) content that may serve to enhance or emphasize the identified gestures (Hwang [0015]).
Modified Smith does not explicitly teach parsing the text representation of the speech with a natural language parsing algorithm.
Mixter teaches parsing the text representation of the speech with a natural language parsing algorithm to identify the one or more of the plurality of asset states (speech capture module 112 may be further configured to convert that captured audio to text and/or to other representations or embeddings, e.g., using speech-to-text (“STT”) processing techniques [0052]; Also, for example, in some implementations the natural language processor 133 may additionally and/or alternatively include a dependency parser (not depicted) configured to determine syntactic relationships between terms in natural language input [0058]); and
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Mixter for the benefit of selectively analyzing vision frames, e.g., by a gaze and mouth module 116 of adaptation engine 115, to monitor for occurrence of mouth movement of a user (Mixter [0042]).
Regarding claim 19, claim 19 is similar to claim 6. It is rejected in the same manner, with the same reasoning applying.
26. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US20180232921) in view of Hwang et al. (US20200005539, filed 06/27/2018) in view of Sowden et al. (US20200241716, filed 01/25/2019) and further in view of Menard et al. (US20200020166, filed 07/16/2018).
Regarding claim 14, Modified Smith teaches the method of claim 12. Hwang teaches wherein selecting the training focus further includes (Each virtual flair corresponding to a particular gesture may be selected in order to emphasize the gesture, provide additional context to the gesture, and/or otherwise add visual excitement to the AR environment [0097]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Smith to incorporate the teachings of Hwang for the benefit of a NED (near eye display) system that is able to present different types of AR (augmented reality) content that may serve to enhance or emphasize the identified gestures (Hwang [0015]).
Modified Smith does not explicitly teach receiving a user input modifying the potential training focus.
Menard teaches receiving a user input modifying the potential training focus (receive a user input to modify the one or more physical parameters for the simulation via the input device [0083]); and
selecting the modified potential training focus as the training focus (and present the simulation with the modified one or more physical parameters via the display device [0083]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Smith to incorporate the teachings of Menard for the benefit of accurately simulating the physical behavior 42 of the physical object 34 in real-time (Menard [0043]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO whose telephone number is (571)272-8670. The examiner can normally be reached Monday-Friday 8am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.G./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148