Prosecution Insights
Last updated: April 19, 2026
Application No. 18/459,536

COMPUTATIONALLY CUSTOMIZING INSTRUCTIONAL CONTENT

Non-Final OA (§103, §DP)
Filed: Sep 01, 2023
Examiner: CHEN, KUANG FU
Art Unit: 2143
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 203 granted / 252 resolved; +25.6% vs TC avg)
Interview Lift: +67.0% (strong; allow rate of resolved cases with interview vs. without)
Typical Timeline: 2y 11m avg prosecution; 37 applications currently pending
Career History: 289 total applications across all art units

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 47.4% (+7.4% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 252 resolved cases.

Office Action

Grounds of rejection: §103 (obviousness); nonstatutory double patenting (§DP)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the claims filed on 9/1/2023. Claims 1-20 are presented for examination.

Priority

Acknowledgment is made of applicant's claim for benefit of a prior-filed parent application, no. 17/326,276, now U.S. Patent No. 11,771,977, filed 5/20/2021.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 9/1/2023 has been considered by the examiner.

Specification

The disclosure is objected to because of the following informalities: Page 1, CROSS-REFERENCE TO RELATED APPLICATIONS [0001], should be updated to reflect that the current status of parent Application 17/326,276 is now U.S. Patent No. 11,771,977.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1, 12, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 12-13, and 18 of U.S. Patent No. 11,771,977 B2 (hereinafter Patent’977).
Regarding claim 1, the comparison below sets claim 1 of the Instant Application against portions of claim 1 of Patent’977:

18/493,657 (Instant Application), Claim 1:
“A computing system, comprising: a processor; and memory storing instructions that, when executed by the processor, cause the processor to perform acts comprising: streaming instructional media to a client device for presentation at the client device, where the instructional media includes video of a human instructor setting forth audible instructions with respect to an activity being performed by a user of the client device; as the instructional media is being streamed to the client device, obtaining user data that pertains to performance of the activity by the user; generating user-customized media based upon the user data, where the user-customized media includes at least one of: computer-generated video data that includes an image of the instructor; or computer-generated audio data that includes a voice of the instructor; and streaming the user-customized media as part of the instructional media to the client device for presentation at the client device.”

US 11,771,977 B2 (Patent’977), Claim 1:
“A computing system, comprising: a processor; and memory storing instructions that, when executed by the processor, cause the processor to perform acts comprising: causing instructional media to be played to a user over a speaker and a display, wherein a human instructor in the instructional media provides guidance as to how to perform an activity when the instructional media is played, … obtaining user data while the instructional media is played to the user, the user data pertaining to performance of the activity by the user; generating, by a computer-implemented model that is trained based upon audiovisual data of the human instructor, a user-customized portion of the instructional media, where the computer-implemented model generates the user-customized portion of the instructional media based upon the user data; and causing the first portion of the instructional media, the user-customized portion of the instructional media, and the second portion of the instructional media to be played to the user comprises: sending audio data of the user-customized portion of the instructional media to the speaker, wherein based upon the audio data, the speaker emits audible words in a voice of the human instructor; and sending video data of the user-customized portion of the instructional media to the display, wherein based upon the video data, the display displays images of the human instructor depicting the human instructor speaking the audible words as the speaker emits the audible words.”
Instant Application claim 1's limitation of “streaming instructional media to a client device for presentation at the client device” is obvious in light of Patent’977 claim 1's recitation of “causing instructional media to be played to a user over a speaker and a display,” because Patent’977's “causing to be played” encompasses streaming: Patent’977's specification explicitly discusses "livestreamed" media, so selecting "streaming" as the delivery method is an obvious design choice. Additionally, Patent’977's “played to a user over a speaker and a display” obviates the Instant Application's “to a client device for presentation at the client device.”

Further, Instant Application claim 1's limitation of “where the instructional media includes video of a human instructor setting forth audible instructions with respect to an activity being performed by a user of the client device” is obvious in light of Patent’977 claim 1's recitation of “wherein a human instructor in the instructional media provides guidance as to how to perform an activity when the instructional media is played,” because instructional media including video and audible instructions is obvious when the instructional media is played over a speaker and a display.

Instant Application claim 1's limitation of “as the instructional media is being streamed to the client device, obtaining user data that pertains to performance of the activity by the user” is obvious in light of Patent’977 claim 1's recitation of “obtaining user data while the instructional media is played to the user, the user data pertaining to performance of the activity by the user,” as said limitations are merely restated.
Instant Application claim 1's limitation of “generating user-customized media based upon the user data, where the user-customized media includes at least one of:” is obvious in light of Patent’977 claim 1's recitations of “generating…a user-customized portion of the instructional media…; and causing…the instructional media to be played to the user comprises:”, as these are merely minor wording changes.

Instant Application claim 1's limitations of “computer-generated video data that includes an image of the instructor; or computer-generated audio data that includes a voice of the instructor;” are obvious in light of Patent’977 claim 1's recitations of “sending audio data of the user-customized portion of the instructional media to the speaker, wherein based upon the audio data, the speaker emits audible words in a voice of the human instructor; and sending video data of the user-customized portion of the instructional media to the display,” because the Instant Application uses “at least one of,” allowing for only audio or only video, while parent Patent’977 required the combination; claiming a sub-combination of a previously patented combination is an obvious variation.

Instant Application claim 1's limitation of “streaming the user-customized media as part of the instructional media to the client device for presentation at the client device” is obvious in light of Patent’977 claim 1's recitation of “the display displays images of the human instructor depicting the human instructor speaking the audible words as the speaker emits the audible words,” because Patent’977 previously recited “causing instructional media to be played to a user over a speaker and a display,” which obviates the Instant Application's streaming the user-customized media as part of the instructional media to the client device for presentation at the client device.
Therefore, although claim 1 of the Instant Application is not identical to claim 1 of Patent’977, claim 1 of the Instant Application is not patentably distinct from, and is obvious in light of, claim 1 of Patent’977.

Regarding claim 12, the comparison below sets claim 12 of the Instant Application against portions of claims 12-13 of Patent’977:

18/493,657 (Instant Application), Claim 12:
“A method performed by a computing system, the method comprising: streaming instructional media simultaneously to several client devices, where the instructional media includes video of a human instructor setting forth audible instructions with respect to an activity being performed by users of the several client devices; as the instructional media is being streamed to the several client devices, … obtaining user data from a client device from amongst the several client devices, where the user data pertains to performance of the activity by a user of the client device; generating customized media for the user based upon the obtained user data, where the customized media for the user comprises computer-generated audiovisual data of the instructor, where the computer-generated audiovisual data pertains to the activity being performed by the user; and streaming the customized media for the user to the client device for presentment to the user as part of the instructional media being streamed to the client device while refraining from streaming the customized media to at least one other client device in the several client devices.”

US 11,771,977 B2 (Patent’977), Claim 12:
“A method performed by a processor, comprising: causing instructional media to be played on a device to a user, the device comprising a speaker and a display, wherein a human instructor in the instructional media provides guidance as to how to perform an activity when the instructional media is played on the device, … obtaining user data while the instructional media is played to the user, the user data pertaining to performance of the activity by the user; generating, by a computer-implemented model that is trained based upon audiovisual data of the human instructor, a user-customized portion of the instructional media, where the computer-implemented model generates the user-customized portion of the instructional media based upon the user data; and … sending audio data of the user-customized portion of the instructional media to the speaker, wherein the audio data of the user-customized portion of the instructional media comprises audible words in a voice of the human instructor; sending video data of the user-customized portion of the instructional media to the display, …”

US 11,771,977 B2 (Patent’977), Claim 13:
“The method of claim 12, further comprising: causing the instructional media to be played on a second device to a second user; obtaining second user data pertaining to performance of the activity by the second user; … and further wherein the user-customized portion of the instructional media and the second user-customized portion of the instructional media are different from one another.”

Instant Application claim 12's preamble of “A method performed by a computing system, the method comprising” is obvious in light of Patent’977 claim 12's recitation of “A method performed by a processor, comprising,” because a computing system is an obvious variation of a processor.
Instant Application claim 12's limitations of “streaming instructional media simultaneously to several client devices, where the instructional media includes video of a human instructor setting forth audible instructions with respect to an activity being performed by users of the several client devices; as the instructional media is being streamed to the several client devices” are obvious in light of Patent’977 claims 12 and 13's combined recitations of “causing instructional media to be played on a device to a user, the device comprising a speaker and a display, wherein a human instructor in the instructional media provides guidance as to how to perform an activity when the instructional media is played on the device” and “causing the instructional media to be played on a second device to a second user; obtaining second user data pertaining to performance of the activity by the second user,” because streaming instructional media simultaneously to several client devices is obvious over playing the instructional media both on a device of a first user and on a second device of a second user, with both users performing the same activity while the instructional media is played.

Instant Application claim 12's limitation of “obtaining user data from a client device from amongst the several client devices, where the user data pertains to performance of the activity by a user of the client device” is obvious in light of Patent’977 claim 12's recitation of “obtaining user data while the instructional media is played to the user, the user data pertaining to performance of the activity by the user,” because obtaining the data from a client device would be obvious given that the instructional media is played to the user on a device, as defined in the prior limitations of Patent’977 claim 12.
Instant Application claim 12's limitations of “generating customized media for the user based upon the obtained user data, where the customized media for the user comprises computer-generated audiovisual data of the instructor” are obvious in light of Patent’977 claim 12's recitations of “generating, by a computer-implemented model that is trained based upon audiovisual data of the human instructor, a user-customized portion of the instructional media” and “where the computer-implemented model generates the user-customized portion of the instructional media based upon the user data.”

Additionally, Instant Application claim 12's limitation of “where the computer-generated audiovisual data pertains to the activity being performed by the user” is obvious in light of Patent’977 claim 12's recitation of “generating, by a computer-implemented model that is trained based upon audiovisual data of the human instructor, a user-customized portion of the instructional media, where the computer-implemented model generates the user-customized portion of the instructional media based upon the user data” together with its prior recitation of “obtaining user data while the instructional media is played to the user, the user data pertaining to performance of the activity by the user,” because Patent’977 claim 12 requires the user data to pertain to performance of the activity while the instructional media is played and requires the model to generate a portion of the instructional media from that data, which obviates “where the computer-generated audiovisual data pertains to the activity being performed by the user” in claim 12 of the Instant Application.
Instant Application claim 12's limitation of “streaming the customized media for the user to the client device for presentment to the user as part of the instructional media being streamed to the client device” is obvious in light of Patent’977 claim 12's recitations of “sending audio data of the user-customized portion of the instructional media to the speaker, wherein the audio data of the user-customized portion of the instructional media comprises audible words in a voice of the human instructor; sending video data of the user-customized portion of the instructional media to the display.”

Instant Application claim 12's limitation of “while refraining from streaming the customized media to at least one other client device in the several client devices” is obvious in light of Patent’977 claim 13's recitation of “and further wherein the user-customized portion of the instructional media and the second user-customized portion of the instructional media are different from one another,” because refraining from streaming the same customized media to at least one other client device is obvious given that different customized instructional media are generated for different users, and thus the server would refrain from sending the same customized media to every user.

Therefore, although claim 12 of the Instant Application is not identical to claims 12 and 13 of Patent’977, claim 12 of the Instant Application is not patentably distinct from, and is obvious in light of, claims 12 and 13 of Patent’977.
Regarding claim 20, the comparison below sets claim 20 of the Instant Application against portions of claim 18 of Patent’977:

18/493,657 (Instant Application), Claim 20:
“A computer-readable storage medium comprising instructions that, when executed by a processor, cause the processor to perform acts comprising: streaming instructional media to a client device for presentation at the client device, where the instructional media includes video of a human instructor setting forth audible instructions with respect to an activity being performed by a user of the client device; as the instructional media is being streamed to the client device, obtaining user data that pertains to performance of the activity by the user; generating user-customized media based upon the user data, where the user-customized media includes at least one of: computer-generated video data that includes an image of the instructor; or computer-generated audio data that includes a voice of the instructor; and streaming the user-customized media as part of the instructional media to the client device for presentation at the client device.”

US 11,771,977 B2 (Patent’977), Claim 18:
“A computer-readable storage medium comprising instructions that, when executed by a processor of a computing system, perform acts comprising: playing instructional media on a device to a user, the device comprising a speaker and a display, wherein a human instructor in the instructional media provides guidance as to how to perform an activity when the instructional media is played on the device, and further wherein the instructional media comprises a first portion and a second portion; while the instructional media is being played, obtaining user data pertaining to performance of the activity by the user; generating…a user-customized portion of the instructional media, wherein the computer-implemented model generates the user-customized portion of the instructional media based upon the user data, and further wherein the user-customized portion of the instructional media comprises: audible words in a voice of the human instructor; and images of the human instructor that depict the human instructor emitting the audible words; and playing the first portion of the instructional media, the user-customized portion of the instructional media, and the second portion of the instructional media on the device to the user, …”

Instant Application claim 20's limitation of “streaming instructional media to a client device for presentation at the client device” is obvious in light of Patent’977 claim 18's recitation of “playing instructional media on a device to a user, the device comprising a speaker and a display.”
Instant Application claim 20's limitations of “where the instructional media includes video of a human instructor setting forth audible instructions with respect to an activity being performed by a user of the client device; as the instructional media is being streamed to the client device, obtaining user data that pertains to performance of the activity by the user” are obvious in light of Patent’977 claim 18's recitations of “wherein a human instructor in the instructional media provides guidance as to how to perform an activity when the instructional media is played on the device…; while the instructional media is being played, obtaining user data pertaining to performance of the activity by the user,” because obtaining user data that pertains to performance of the activity as the instructional media is being streamed to the client device is an obvious restatement of obtaining user data pertaining to performance of the activity while the instructional media is being played.

Instant Application claim 20's limitation of “generating user-customized media based upon the user data, where the user-customized media includes at least one of:” is obvious in light of Patent’977 claim 18's recitation of “generating…a user-customized portion of the instructional media, wherein the computer-implemented model generates the user-customized portion of the instructional media based upon the user data, and further wherein the user-customized portion of the instructional media comprises:”.

Instant Application claim 20's limitations of “computer-generated video data that includes an image of the instructor; or computer-generated audio data that includes a voice of the instructor” are obvious in light of Patent’977 claim 18's recitations of “audible words in a voice of the human instructor; and images of the human instructor that depict the human instructor emitting the audible words.”
Instant Application claim 20's limitation of “streaming the user-customized media as part of the instructional media to the client device for presentation at the client device” is obvious in light of Patent’977 claim 18's recitation of “playing the first portion of the instructional media, the user-customized portion of the instructional media, and the second portion of the instructional media on the device to the user,” because the Instant Application claim 20 limitation is broader. Therefore, although claim 20 of the Instant Application is not identical to claim 18 of Patent’977, claim 20 of the Instant Application is not patentably distinct from, and is obvious in light of, claim 18 of Patent’977.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 12-13, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Song et al. (hereinafter Song), US 2014/0120994 A1, in view of Russell et al. (hereinafter Russell), US 2022/0072377 A1.
Regarding independent claim 12, Song teaches a method performed by a computing system (FIG. 18, [0205]-[0207]: virtual golf simulation apparatus connected to a server S through a network for performing golf simulations), the method comprising:

streaming instructional media simultaneously to several client devices ([0083], [0098], [0194], [0208] suggest the customized lesson provision means 300 generates customized lesson content based on the analysis result of the shot analysis means and provides the customized lesson content (streaming instructional media) to the user at a client controller M in FIG. 3, wherein personal information of users registered in the server S implies that multiple users can simultaneously access customized lesson content, should their scores require it, in a server/multiple-client network);

where the instructional media includes video of a human instructor setting forth audible instructions with respect to an activity being performed by users of the several client devices ([0191], [0194]-[0195], [0208], [0239] suggest the customized lesson content includes image and voice of a virtual lesson pro, including a real lesson pro or pro golfer (where the instructional media includes video of a human instructor setting forth audible instructions), with respect to the user performing golf shots at the client device as sensed by shot analysis means 212 (with respect to an activity being performed), wherein the server S can service multiple registered users in the server/multiple-client network simultaneously (by users of the several client devices));

obtaining user data from a client device from amongst the several client devices ([0208], [0237] suggest that the result of the golf shot taken by the user is analyzed based on the user's registration amongst the multiple users on the server S); and

streaming media for the user to the client device for presentment to the user as part of the instructional media being streamed to the client device while refraining from streaming the customized media to at least one other client device in the several client devices ([0191], [0194]-[0195], [0208], [0239] suggest transmitting the customized lesson content, as part of golf instruction content, to the registered user's device, the content being different from the customized lesson content for other registered users).

Song does not expressly teach: as the instructional media is being streamed to the several client devices, obtaining user data from a client device from amongst the several client devices, where the user data pertains to performance of the activity by a user of the client device; generating customized media for the user based upon the obtained user data, where the customized media for the user comprises computer-generated audiovisual data of the instructor, where the computer-generated audiovisual data pertains to the activity being performed by the user.

However, Russell teaches: as instructional media is being streamed to several client devices ([0025], [0030] and FIG. 4 suggest a screen displaying the start of an exercise program on a client device (as instructional media is being streamed), wherein the invention calibrates for different people, suggesting multiple client devices (to several client devices)); obtaining user data from a client device from amongst the several client devices ([0025], [0030] and FIG. 4 suggest, at the start of an exercise program, scanning the user's environment to determine which subsequent depth information belongs to the user from amongst the other people being transmitted the exercise program); wherein the user data pertains to performance of the activity by a user of the client device ([0030] and FIG. 4 suggest, at the start of an exercise program, scanning the user's environment to determine which subsequent depth information for the exercise program belongs to the user); generating customized media for the user based upon the obtained user data ([0028], [0030] suggest that, based on the depth information belonging to the user, the system can provide feedback based on accurate body movements and provide new instructions as needed); where the customized media for the user comprises computer-generated audiovisual data of an instructor ([0028], [0030]: the feedback for the user comprises avatar audiovisual feedback instructions); where the computer-generated audiovisual data pertains to the activity being performed by the user ([0028], [0030]: the feedback for the user pertains to the exercise program being performed by the user).

Because Song and Russell both address customizing presented content for a user from amongst other users of transmitted content, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Russell into Song's virtual golf simulation apparatus, with a reasonable expectation of success, such that the virtual golf simulation apparatus can determine a baseline depth of the user from amongst the registered users on the server S as the customized lesson content starts on the user's client device, generate feedback on the user's golf shots based on the depth information, and transmit new instructions as part of the customized lesson to the client device for presentation, wherein the transmitted new instructions are communicated via audiovisual means of an avatar of the instructor, wherein the transmitted new instructions pertain to the golf shot practice of the user, and wherein each user is presented with different customized content and new instructions based on their registration with the server, thereby teaching: as the instructional media is being streamed to the several client devices, obtaining user data from a client device from amongst the several client devices, where the user data pertains to performance of the activity by a user of the client device; generating customized media for the user based upon the obtained user data, where the customized media for the user comprises computer-generated audiovisual data of the instructor, where the computer-generated audiovisual data pertains to the activity being performed by the user; and streaming the customized media for the user to the client device for presentment to the user as part of the instructional media being streamed to the client device while refraining from streaming the customized media to at least one other client device in the several client devices. This modification would have been motivated by the desire to provide more efficient tools for conducting fitness (Russell [0012]).

Regarding dependent claim 13, Song, in view of Russell, teaches the method of claim 12, where the audiovisual data of the instructor comprises computer-generated images of the instructor and computer-generated audio in a voice of the instructor (see Song [0195]: the generated customized lesson content includes lesson content from the server including video image and the voice of the virtual lesson pro).
Regarding dependent claim 17, Song, in view of Russell, teaches the method of claim 12, where the instructional media comprises a first portion and a second portion, where the first portion is streamed to the client device prior to the user-customized media being streamed to the client device (see Song [0194]: because the customized lesson content is based in part on results of past shot analysis, the previously performed shot analysis with the practice curriculum is streamed to the user’s client device), and further where the second portion is streamed to the client device after the user-customized media is streamed to the client device.

Regarding dependent claim 19, Song, in view of Russell, teaches the method of claim 12, where the client devices are pieces of exercise equipment (see Song ABSTRACT, [0194]: virtual golf apparatus used by the user to perform golf shots).

Claims 1-10, 14, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Song, in view of Russell, and further in view of Theis et al. (hereinafter Theis), US 10,552,977 B1.

Regarding independent claim 1, Song teaches a computing system (FIG. 18, [0205]-[0207] virtual golf simulation apparatus connected to a server S through a network), comprising: a processor (FIG. 18, [0207] server S processor 600); and memory storing instructions that, when executed by the processor, cause the processor to perform acts comprising ([0209], [0212] database 500 storing information constituting customized lesson content to be provided by the virtual golf simulation apparatus.
Processor 600 extracts requested information from the database 500 and performs overall functions): streaming instructional media to a client device for presentation at the client device ([0083], [0098], [0194] suggest the customized lesson provision means 300 generates customized lesson content based on the analysis result of the shot analysis means and provides the generated customized lesson content (streaming instructional media) to the user at a client controller M (to a client device for presentation at the client device) in FIG. 3), where the instructional media includes video of a human instructor setting forth audible instructions with respect to an activity being performed by a user of the client device ([0191], [0194]-[0195], [0239] suggest wherein the customized lesson content includes image and voice of a virtual lesson pro, including a real lesson pro or pro golfer (where the instructional media includes video of a human instructor setting forth audible instructions), with respect to the user of the client device performing golf shots sensed by shot analysis means 212 (with respect to an activity being performed by a user of the client device)); obtaining user data that pertains to performance of the activity by the user ([0237] suggests that the result of the golf shot taken by the user is analyzed).

Song does not expressly teach, as the instructional media is being streamed to the client device, obtaining user data that pertains to performance of the activity by the user; generating user-customized media based upon the user data; and streaming the user-customized media as part of the instructional media to the client device for presentation at the client device. However, Russell teaches, as instructional media is being streamed to a client device ([0030] and FIG.
4 suggest the start of an exercise program being displayed on the screen of a client device (as instructional media is being streamed to a client device)), obtaining user data that pertains to performance of an activity by a user ([0030] and FIG. 4 suggest at the start of an exercise program scanning the user’s environment to determine which subsequent depth information belongs to the user); generating user-customized media based upon the user data ([0028], [0030] suggest that, based on the depth information belonging to the user, the system can provide feedback based on accurate body movements and provide new instructions as needed); and streaming the user-customized media as part of the instructional media to the client device for presentation at the client device ([0028], [0030] suggest any new instructions as part of the therapy exercise program will be displayed on the screen of the client device, such as shown in FIG. 4).

Because Song and Russell address the issue of providing avatar feedback to the user regarding performance of an activity, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings wherein, as instructional media is being streamed to a client device, obtaining user data that pertains to performance of an activity by a user; generating user-customized media based upon the user data; and streaming the user-customized media as part of the instructional media to the client device for presentation at the client device, as suggested by Russell, into Song’s computing system, with a reasonable expectation of success, such that the virtual golf simulation apparatus can determine a baseline depth of the user as the customized lesson content is started on the client device of the user to generate feedback on the user’s golf shots based on the depth information and transmit new instructions as part of the customized lesson to the client device for presentation, to
teach: as the instructional media is being streamed to the client device, obtaining user data that pertains to performance of the activity by the user; generating user-customized media based upon the user data; and streaming the user-customized media as part of the instructional media to the client device for presentation at the client device. This modification would have been motivated by the desire to provide more efficient tools for conducting fitness (Russell [0012]).

Song and Russell do not expressly teach where the user-customized media includes at least one of: computer-generated video data that includes an image of the instructor; or computer-generated audio data that includes a voice of the instructor. However, Theis teaches user-customized media that includes at least one of: computer-generated video data that includes an image of a person (1:23-48 suggest generating a swapped image that is photorealistic and scalable for real-time production (user-customized media); the swapped image, produced by an identity transformation network, retains the expression/pose of the target image but incorporates the style, or facial features, of the source identity of a person, where the output is a frame in a video stream (includes at least one of computer-generated video data that includes an image of a person)).
Because Song, in view of Russell, and Theis address the issue of displaying a human in a video, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings wherein user-customized media includes at least one of: computer-generated video data that includes an image of a person, as suggested by Theis, into Song and Russell’s virtual golf simulation apparatus, such that the customized lesson content with the inclusion of the golf pro can include at least computer-generated video that includes a generated face for the golf pro, to teach where the user-customized media includes at least one of: computer-generated video data that includes an image of the instructor; or computer-generated audio data that includes a voice of the instructor. This modification would have been motivated by the desire to provide a photorealistic face for creative applications, including provision of privacy (Theis 1:5-20).

Regarding dependent claim 2, Song, in view of Russell and Theis, teaches the computing system of claim 1, where the instructional media includes a first portion and a second portion, where the first portion is streamed to the client device prior to the user-customized media being streamed to the client device (see Song [0194]: because the customized lesson content is based in part on results of past shot analysis, the previously performed shot analysis with the practice curriculum is streamed to the user’s client device), and further where the second portion is streamed to the client device after the user-customized media is streamed to the client device (see Song [0194]: the customized lesson content is the second portion, streamed after the user-customized lessons are streamed from the virtual pro).
Regarding dependent claim 3, Song, in view of Russell and Theis, teaches the computing system of claim 1, where the user-customized media includes the computer-generated video data and the computer-generated audio data (see Song [0195]: the generated customized lesson content includes lesson content generated from the server, including a video image and the voice of the virtual lesson pro).

Regarding dependent claim 4, Song, in view of Russell and Theis, teaches the computing system of claim 1, where the user-customized media is generated as the instructional media is being streamed to the client device (see Song [0194]-[0195]: the generated customized lesson content includes lesson content generated from the server, including a video image and the voice of the virtual lesson pro, provided to the user’s client device).

Regarding dependent claim 5, Song, in view of Russell and Theis, teaches the computing system of claim 1, where the computer-generated video data is generated by a computer-implemented model that is trained based upon video data of the human instructor (see Theis 1:24-67, where the trained neural network generates a video stream and is trained based on faces of the source identity).

Regarding dependent claim 6, Song, in view of Russell and Theis, teaches the computing system of claim 1, where the computer-generated audio data is generated by a computer-implemented model that is trained based upon audio data that captures the voice of the human instructor (see Russell [0043], avatar audio from text to speech).

Regarding dependent claim 7, Song, in view of Russell and Theis, teaches the computing system of claim 1, where streaming the instructional media to the client device comprises livestreaming the instructional media to the client device (see Theis 3:29-43, system used in video streaming applications).
Regarding dependent claim 8, Song, in view of Russell and Theis, teaches the computing system of claim 1, where the client device is a piece of exercise equipment being employed by the user to perform the activity (see Song ABSTRACT, [0194]: virtual golf apparatus used by the user to perform golf shots).

Regarding dependent claim 9, Song, in view of Russell and Theis, teaches the computing system of claim 8, where the user data comprises data output by a sensor of the exercise equipment (see Song [0053]: user data comprises data output by sensing device 10 of the virtual golf apparatus).

Regarding dependent claim 10, Song, in view of Russell and Theis, teaches the computing system of claim 1, where the user-customized media comprises the computer-generated audio data, and further where the computer-generated audio data comprises a name of the user (see Russell [0029]: the avatar encourages the person exercising with “talking” feedback, which suggests encouraging the person by name).

Regarding dependent claim 14, Song, in view of Russell, teaches all the elements of claim 13. Song and Russell do not expressly teach where the computer-generated images of the instructor are generated by a computer-implemented model that has been trained based upon video of the instructor. However, Theis teaches where the computer-generated images of the instructor are generated by a computer-implemented model that has been trained based upon video of the instructor (1:23-56, a face swap in a video of a person, trained based upon source videos of that person).
Because Song, in view of Russell, and Theis address the issue of displaying a human in a video, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings where the computer-generated images of the instructor are generated by a computer-implemented model that has been trained based upon video of the instructor, as suggested by Theis, into Song and Russell’s virtual golf simulation apparatus. This modification would have been motivated by the desire to provide a photorealistic face for creative applications, including provision of privacy (Theis 1:5-20).

Regarding dependent claim 18, Song, in view of Russell, teaches all the elements of claim 12. Song and Russell do not expressly teach where streaming the instructional media to the several client devices comprises livestreaming the instructional media to the several client devices. However, Theis teaches where streaming the instructional media to the several client devices comprises livestreaming the instructional media to the several client devices (see Theis 3:29-43, system used in video streaming applications). Because Song, in view of Russell, and Theis address the issue of displaying a human in a video, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate these teachings, as suggested by Theis, into Song and Russell’s virtual golf simulation apparatus. This modification would have been motivated by the desire to provide a photorealistic face for creative applications, including provision of privacy (Theis 1:5-20).

Regarding independent claim 20, claim 20 is a computer-readable storage medium claim that is substantially the same as claim 1.
Thus, claim 20 is rejected for the same reasons as claim 1. In addition, Song teaches a computer-readable storage medium comprising instructions that, when executed by a processor, cause the processor to perform acts comprising ([0209], [0212]: database 500 storing information constituting customized lesson content to be provided by the virtual golf simulation apparatus; processor 600 extracts requested information from the database 500 and performs overall functions).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Song, in view of Russell and Theis, and further in view of Yoo, US 2002/0042328 A1.

Regarding dependent claim 11, Song, in view of Russell and Theis, teaches all the elements of claim 1. Song, Russell, and Theis do not expressly teach the computing system of claim 1, where the user data comprises heart rate of the user. However, Yoo teaches where the user data comprises heart rate of the user ([0012], monitor the user’s heart rate). Because Song, in view of Russell and Theis, and Yoo address physical activities performed by a user, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings where the user data comprises heart rate of the user, as suggested by Yoo, into Song, Russell, and Theis’ virtual golf simulation apparatus, with a reasonable expectation of success, such that the virtual golf simulation apparatus monitors the user’s heart rate. This modification would have been motivated by the desire to support the user’s health management (Yoo [0006]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUANG FU CHEN, whose telephone number is (571) 272-1393. The examiner can normally be reached M-F, 9:00 am-5:30 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KC CHEN/
Primary Patent Examiner, Art Unit 2143

Prosecution Timeline

Sep 01, 2023
Application Filed
Nov 29, 2025
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579425
PARAMETERIZED ACTIVATION FUNCTIONS TO ADJUST MODEL LINEARITY
2y 5m to grant Granted Mar 17, 2026
Patent 12566994
SYSTEMS AND METHODS TO CONFIGURE DEFAULTS BASED ON A MODEL
2y 5m to grant Granted Mar 03, 2026
Patent 12561593
METHOD FOR DETERMINING PRESENCE OF A SIGNATURE CONSISTENT WITH A PAIR OF MAJORANA ZERO MODES AND A QUANTUM COMPUTER
2y 5m to grant Granted Feb 24, 2026
Patent 12561561
Mapping User Vectors Between Embeddings For A Machine Learning Model for Authorizing Access to Resource
2y 5m to grant Granted Feb 24, 2026
Patent 12561497
AUTOMATED OPERATING MODE DETECTION FOR A MULTI-MODAL SYSTEM WITH MULTIVARIATE TIME-SERIES DATA
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+67.0%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
