DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/30/2025 has been entered.
Response to Arguments
Applicant's arguments filed 09/30/2025 have been fully considered but they are not persuasive.
Regarding the double patenting rejection, the applicant does not amend the claims to overcome the rejection, nor does the applicant argue it; instead, the applicant requests that the double patenting rejection be held in abeyance until it is the only remaining rejection. Therefore, the examiner maintains the double patenting rejection.
Regarding claim 1, the applicant argues that Song, Rose, and Kosowsky, whether considered alone or in combination, fail to teach or suggest the amended claims. For example, claim 1 as currently amended recites "identify pre-shot routines where a user swings but does not hit a ball, wherein a pre-shot routine comprises a set of actions performed by the user prior to an actual swing in which the user hits the ball" (Remarks received 09/30/2025, page 12).
The examiner maintains that Rose teaches "identify pre-shot routines where a user swings but does not hit a ball, wherein a pre-shot routine comprises a set of actions performed by the user prior to an actual swing in which the user hits the ball".
Rose discloses "identify pre-shot routines where the user swings but does not hit a ball". For example, in paragraph [0023], Rose teaches optimizing a golf swing in a pre-shot routine by measuring and providing feedback. In paragraph [0042], Rose teaches practice exercises, as well as testing and analysis. In paragraph [0047], Rose teaches that the initial conditions of the testing session include the pre-shot routine. In Fig. 1 and paragraph [0048], Rose teaches that the beginning of the backswing, the initial downswing, the mid-downswing, and the late downswing are pre-shot routines; Rose further teaches that the user does not hit a ball during these actions.
[image: media_image1.png]
In Fig. 2A and paragraph [0058], Rose teaches that the downswing was initiated by the reversal of pelvic rotation followed by a reversal of upper torso rotation; Rose further teaches that these actions do not include hitting the golf ball.
[image: media_image2.png]
Rose furthermore teaches that the backswing, downswing, and upper torso rotation are pre-shot routines during which the user does not hit a ball.
Rose further discloses "wherein a pre-shot routine comprises a set of actions performed by the user prior to an actual swing in which the user hits the ball". For example, in paragraph [0023], Rose teaches optimizing a golf swing in a pre-shot routine by measuring and providing feedback. In paragraph [0042], Rose teaches practice exercises; Rose further teaches testing and analysis. In paragraph [0047], Rose teaches that calibration procedures are customized to the specific types of sensors being used in the testing environment; Rose further teaches that the sensors are adjusted based on the initial conditions of the testing session; Rose furthermore teaches that the initial conditions of the testing session include the pre-shot routine. In paragraph [0054], Rose teaches that the rising clubhead that initiated the backswing is a set of actions; Rose further teaches that in these actions the user does not hit the ball; Rose furthermore teaches that these actions occur prior to the ball-hitting actions, i.e., prior to the actual swing. In Fig. 2A and paragraph [0058], Rose teaches that the backswing, downswing, and upper torso rotation are a set of actions; Rose further teaches that in these actions the user does not hit a ball; Rose furthermore teaches that these actions occur prior to the ball-hitting actions, i.e., prior to the actual swing.
Regarding claim 1, the applicant argues that the cited art fails to teach or suggest "a machine learning module configured to determine a user's standard practice routine over time and determine when a new practice routine deviates from the standard practice routine". The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons:
Kosowsky discloses "a machine learning module configured to determine a user's standard practice routine over time and determine when a new practice routine deviates from the standard practice routine". For example, in paragraph [0057], Kosowsky teaches allowing the user to apply statistical analysis to compare their motions to the motions of other individuals or to the aggregated motions of a set of other individuals. In paragraph [0101], Kosowsky teaches that the golfer may want to see how many times their head or part of the body stayed within the boundaries for swings of a particular type of club in each of the preceding sessions. In paragraph [0102], Kosowsky teaches that the user compares the visualizations of their good swings with their bad swings to help in their training. In Fig. 10 and paragraph [0103], Kosowsky teaches that, using machine learning algorithms, models 1003 or 1004 are computed that will predict the qualitative result (good or bad 1005) or the quantitative result (expected distance 1006 the golf ball will travel) given the series of timestamps and target positions 1001 or 1002; Kosowsky further teaches that machine learning algorithms determine that a golf swing is good, i.e., a standard practice routine; Kosowsky furthermore teaches that machine learning algorithms determine that a golf swing is bad, i.e., one that deviates from the standard practice routine. In Fig. 11 and paragraph [0104], Kosowsky teaches applying machine learning to predict the result of a swing from the series of target positions 1001 and 1002; Kosowsky further teaches applying machine learning to compute models 1103 or 1104 to yield qualitative 1105 or quantitative 1106 results from such sound recordings 1101 and 1102; Kosowsky furthermore teaches that the model maps sound recordings to swing results and that the user is informed, just after the swing, how good the swing was.
Regarding claim 1, the applicant argues that the cited art fails to teach or suggest "the visual representations comprise user-selected kinematic parameters with a 3D avatar of the swing, animated to cause movement of the 3D avatar displayed on the user device." The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons:
Rose discloses "wherein the visual representations comprise user-selected kinematic parameters with a 3D avatar of the swing, animated to cause movement of the 3D avatar displayed on the user device". For example, in paragraph [0034], Rose teaches providing values for a plurality of parameters, where the parameters are analyzed and compared to target values for these parameters; Rose further teaches selecting and measuring body and club movements associated with a golf swing. In paragraph [0047], Rose teaches that calibration routines for an accelerometer attached to a body segment identify the position and orientation of the sensors relative to the segment. In paragraph [0048], Rose teaches producing and selecting the desired parameter. In Fig. 2A, Fig. 2B, and paragraph [0049], Rose teaches that the graphical display includes images of an avatar performing the movement; Rose further teaches displaying a three-dimensional animation such that body movements can be seen from different points of view in three-dimensional space; Rose furthermore teaches that the graphical display may include videos of other subjects performing ideal or non-ideal golf swings.
[image: media_image3.png]
Song discloses "wherein the visual representations comprise user-selected kinematic parameters with a 3D avatar of the swing, animated to cause movement of the 3D avatar displayed on the user device". For example, in paragraph [0048], Song teaches that the electronically captured images 108 of the subject 103 performing the activity 104 are automatically converted to electronic data and models by processors. In Fig. 1 and paragraph [0050], Song teaches that if the activity were playing golf, the predefined reference locations include the club head and club shaft; Song furthermore teaches that the kinematic parameters include predefined reference locations. In Fig. 6 and paragraph [0063], Song teaches causing the depiction 606 of the standard to appear superimposed atop a depiction 607 of the subject performing the activity in the electronically altered image 601. In Fig. 8 and paragraph [0066], Song teaches that the audio data 802 instructs the subject to move a predefined feature of the subject toward a predefined standard reference location. In Fig. 8 and paragraph [0068], Song teaches that the subject is instructed to decrease the elbow angle to move the subject reference location of the elbow toward the standard reference location of the standard; Song further teaches that geometric alignment 805 refers to the right knee, and indicates that the subject has a knee bend of eight degrees, while the standard has a knee bend of twenty-five degrees; Song furthermore teaches that the kinematic parameters include movements and angles.
[image: media_image4.png]
In Fig. 1 and paragraph [0070], Song teaches that the one or more standard reference locations 128, 129, 130 are selected parameters. In Fig. 2 and paragraph [0082], Song teaches that these predefined features 218 are selected as reference locations of both the subject and the standard; Song further teaches that a golfer prefers predefined reference locations such as the club head, club shaft, grip, hands, elbows, shoulders, head, hips, knees, and feet. In paragraph [0108], Song teaches that a video output component displays a sequence of images. In Fig. 1, Fig. 10, and paragraph [0130], Song teaches superimposing a representation 1003 of the standard upon the subject 1004 in the one or more three-dimensional images 1001 of a subject 103 performing an activity 104.
[image: media_image5.png]
Song further teaches that the 3D avatars of a piano player, a golfer, and a yoga practitioner are tailored. In paragraph [0131], Song teaches that the one or more processors of the local or remote electronic device electronically alter the one or more three-dimensional images 1001 of a subject 103 performing an activity 104 to identify the differences between the at least one standard reference location and the at least one corresponding subject reference location in one or more electronically altered three-dimensional images 1005. In paragraph [0132], Song teaches the at least one standard reference location and the at least one corresponding subject reference location. In paragraph [0137], Song teaches instructing the subject to move a predefined feature of the subject toward a predefined standard reference location to appear in the one or more electronically captured images.
Regarding claim 4, the applicant argues that the cited art fails to teach or suggest "a watch list engine configured to enable a user to select via a user interface one or more golf-specific, kinematic parameters and sequences of motion during the swing, and to correlate results of the watch list engine with additional context data to affect what information is presented to the user, wherein additional context data comprises non-kinematic information that is used to interpret performance and tailor feedback, visuals, or recommendations presented to the user; and alter visual representations based on the correlated results of the watch engine list with additional context data". The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons:
The term "or" in the claim recites alternatives; a limitation recited in the alternative is met if any one of the alternatives is taught by the prior art.
Kosowsky discloses "provide a watch list engine configured to enable a user to select via a user interface one or more golf-specific, kinematic parameters and sequences of motion during the swing and to correlate results of the watch list engine with additional context data to affect what information is presented to the user". For example, in Fig. 2C and paragraph [0072], Kosowsky teaches that the head 2c02 of the golfer 2c01 and the knee 2c04 of the golfer 2c01 are selected and tracked with a user interface, i.e., the black and highlighted dot points.
[image: media_image6.png]
Kosowsky further teaches that the golfer is alerted when either the head 2c02 moves outside the boundaries depicted as rectangle 2c03 or the knee 2c04 moves outside the boundaries depicted as rectangle 2c05; Kosowsky furthermore teaches generating various alerts for various sets of combinations of targets crossing boundaries. In paragraph [0080], Kosowsky teaches that the system provides for the user to specify any of these factors through a user interface on a smartphone. In Fig. 5 and paragraph [0084], Kosowsky teaches that the user indicates their skill level and the swing they would like to practice by using a user interface on a smartphone. In Fig. 13 and paragraph [0106], Kosowsky teaches providing a user interface 1307 to interact with the user. In Fig. 17, Fig. 18, and paragraph [0119], Kosowsky teaches determining the most salient fault affecting a user's performance across a set of activities; Kosowsky further teaches that this sorted list is presented to the user to inform them of the most important factors that affected their performance in the session.
Kosowsky further discloses "wherein additional context data comprises non-kinematic information that is used to interpret performance and tailor feedback, visuals, or recommendations presented to the user". For example, in Fig. 17, Fig. 18, and paragraph [0119], Kosowsky teaches determining the most salient fault affecting a user's performance across a set of activities; Kosowsky further teaches that this sorted list is presented to the user to inform them of the most important factors that affected their performance in the session; Kosowsky furthermore teaches suggesting that the user try to alter their motion to eliminate the most severe fault. Kosowsky also suggests providing feedback to the user as to the user's progress in addressing the focus or multiple foci over multiple performances of the activity; motion and focus are non-kinematic information.
Kosowsky furthermore discloses "alter visual representations based on the correlated results of the watch engine list with additional context data". For example, in paragraph [0130], Kosowsky teaches that if the target is not found within the desired limits in the image, then the user is told, at step 2008, to either move back from or forward towards the smartphone; Kosowsky further teaches that if the location of the target is too close to the top of the image, the user is told to move back.
Song discloses “alter visual representations based on the correlated results of the watch engine list with additional context data”. For example, in Fig. 8 and paragraph [0066], Song teaches the electronically altered image 801; Song further teaches outputting the differences between the one or more standard reference locations and the one or more corresponding subject reference locations. In Fig. 8 and paragraph [0067], Song teaches identifying the differences between the at least one standard reference location and the at least one corresponding subject reference location to appear in the electronically altered image.
Regarding claim 9, the applicant argues that the cited art fails to teach or suggest "display a graphical user interface with options to enable creation of one or more motion trackers comprising specific kinematic parameters comprising higher order derivatives of position with respect to time and sequences of motions". The arguments have been fully considered. The argument regarding "comprising higher order derivatives of position with respect to time" is persuasive. Therefore, the 35 U.S.C. § 103 rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made based on newly applied art. The argument regarding "display a graphical user interface with options to enable creation of one or more motion trackers comprising specific kinematic parameters comprising derivatives with respect to time and sequences of motions" is not persuasive. The examiner cannot concur with the applicant for the following reasons:
Rose discloses "to enable creation of one or more motion trackers comprising specific kinematic parameters comprising derivatives and sequences of motions". For example, in paragraph [0028], Rose teaches the derivative with respect to time of the upper torso rotation angle. In paragraph [0030], Rose teaches pelvic rotational velocity; Rose further teaches the derivative with respect to time of the pelvic rotation angle. In paragraph [0038], Rose teaches that mechanical motion capture systems directly track body joint angles. In paragraph [0045], Rose teaches obtaining values for specific descriptive parameters, such as pelvic and shoulder tilt, the relative difference between the rotation of the hips and the shoulders, free moment, and position of the head. In paragraph [0049], Rose teaches that the graphical display includes videos of other subjects performing ideal or non-ideal golf swings with explanations and comparisons to the current subject's movement. In paragraph [0053], Rose teaches that kinematic data were collected using an eight-camera optometric system for three-dimensional motion analysis at a sampling rate of 240 Hz; Rose further teaches the motion capture system.
Kosowsky discloses “display a graphical user interface with options to enable creation of one or more motion trackers”. For example, in paragraph [0004], Kosowsky teaches allowing a user to set boundaries around the motion of the head. In paragraph [0057], Kosowsky teaches the boundaries are user selectable. In paragraph [0059], Kosowsky teaches the user is able to set a threshold for movement in terms of real-world distance. In Fig. 14B and paragraph [0107], Kosowsky teaches displaying a set of boundaries, the positioning of said boundaries determined by a set of selectable options chosen by the user. In Fig. 15B and paragraph [0110], Kosowsky teaches displaying a button with a share icon 1512.
Russo discloses “display a graphical user interface with options to enable creation of one or more motion trackers”. For example, in col. 5, lines 45-55, Russo teaches one analyst selects particular athletes to track. In col. 8, lines 5-15, Russo teaches an icon is a graphical data representation 166; Russo further teaches by touching one or more particular icons 166 on the screen with a finger or light pen, the analyst selects one or more athletes for further tracking.
Regarding claim 11, the applicant argues that the cited art fails to teach or suggest "display a graphical user interface with options to enable creation of one or more motion trackers comprising specific kinematic parameters and sequences of motions, the kinematic parameters comprising position, velocity, acceleration, and/or higher-order derivatives of the position with respect to time". The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons:
The language "/or higher-order derivatives of the position with respect to time" in the claim is recited in the alternative ("and/or") and is therefore optional.
Rose discloses "to enable creation of one or more motion trackers comprising specific kinematic parameters and sequences of motions, the kinematic parameters comprising position, velocity, acceleration, and/or higher-order derivatives of the position with respect to time". For example, in paragraph [0044], Rose teaches that these sensors measure angular velocity; Rose further teaches that accelerometers measure inclination or linear acceleration; Rose furthermore teaches that cameras measure the position of body segments or objects. In paragraph [0028], Rose teaches the derivative with respect to time of the upper torso rotation angle. In paragraph [0030], Rose teaches pelvic rotational velocity; Rose further teaches the derivative with respect to time of the pelvic rotation angle. In paragraph [0038], Rose teaches that mechanical motion capture systems directly track body joint angles. In paragraph [0045], Rose teaches obtaining values for specific descriptive parameters. In paragraph [0049], Rose teaches that the graphical display may be on a computer monitor, a TV, or a smartphone. In paragraph [0053], Rose teaches that kinematic data were collected using an eight-camera optometric system for three-dimensional motion analysis at a sampling rate of 240 Hz; Rose further teaches the motion capture system. In paragraph [0054], Rose teaches that swing phases were defined based on clubhead and ball kinematics. In paragraph [0060], Rose teaches that all biomechanical parameters increased from easy to medium to hard swings among professional golfers. In paragraph [0061], Rose teaches that the number of biomechanical factors during amateur hard swings that fell outside both one and two standard deviations of mean values for professional hard golf swings increased with handicap.
Kosowsky discloses “display a graphical user interface with options to enable creation of one or more motion trackers”. For example, in paragraph [0004], Kosowsky teaches allowing a user to set boundaries around the motion of the head. In paragraph [0057], Kosowsky teaches the boundaries are user selectable. In paragraph [0059], Kosowsky teaches the user is able to set a threshold for movement in terms of real-world distance. In Fig. 14B and paragraph [0107], Kosowsky teaches displaying a set of boundaries, the positioning of said boundaries determined by a set of selectable options chosen by the user. In Fig. 15B and paragraph [0110], Kosowsky teaches displaying a button with a share icon 1512.
Claims 2-3, 5-8, 10, and 12-42 are not allowable for reasons similar to those discussed above.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-4, 8, 11-13, 21-33, and 35 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4, 8-10, and 18-30 of U.S. Patent No. US 11640725 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because all the limitations of claim 1 are anticipated by claim 1 of U.S. Patent No. US 11640725 B2.
Application 18309849, Claim 1 | U.S. Patent No. US 11640725 B2, Claim 1

Application: 1. A computer-implemented system configured for applying biomechanical analysis to a sequence of images of a user's movement during performance of a swing to generate computer-generated three-dimensional (3D) avatars of the swing based on user-selected kinematic parameters and sequences of motion, the system comprising:
Patent: 1. A computer-implemented system configured for applying biomechanical analysis to a sequence of images of a user's movement during performance of a golf swing to generate computer-generated three-dimensional (3D) avatars of the user's golf swing based on user selected kinematic parameters and sequences of motion, the system comprising:
Application: one or more hardware processors configured by machine-readable instructions to:
Patent: one or more hardware processors configured by machine-readable instructions to:
Patent: receive, from at least one image capture device, the images of the user's movement during the performance of the golf swing;
Patent: provide a watch list engine configured to enable a user to select via a user interface one or more golf-specific, kinematic parameters and sequences of motion during the golf swing;
Application: provide a movement module configured to:
Patent: provide a movement module configured to
Application: quantitatively analyze the sequence of images of the user's movement during performance of the swing to generate a quantitative measurement of the kinematic parameters and sequences of motion, and
Patent: quantitatively analyze the images to measure and track the user's golf swing in 3D space across the images, including to measure and track the user selected, golf-specific, kinematic parameters and sequences of motion, wherein the movement module is configured to identify one or more objects other than the user in the images, wherein the one or more objects includes at least a portion of a golf club, track kinematic parameters of the one or more objects in the images, and generate context data based on the tracked kinematic parameters of the one or more objects, and
Application: identify pre-shot routines where a user swings but does not hit a ball, wherein a pre-shot routine comprises a set of actions performed by the user prior to an actual swing in which the user hits the ball;
Patent: wherein the movement module is configured to identify pre-shot routines, wherein the user swings but does not hit a ball;
Application: provide a machine learning module configured to determine a user's standard practice routine over time and determine when a new practice routine deviates from the standard practice routine;
Patent: provide a machine learning module configured to determine a user's standard pre-shot routine over time and determine when a pre-shot routine deviates from the standard pre-shot routine;
Application: provide a display generator configured to display visual representations of the quantitative measurement of the kinematic parameters and sequences of motion on the user device, wherein the visual representations comprise user-selected kinematic parameters with a 3D avatar of the swing, animated to cause movement of the 3D avatar displayed on the user device.
Patent: provide a 3D avatar engine configured to automatically generate a 3D avatar of the user's golf swing based on the quantitative measurement of the golf-specific, kinematic parameters and sequences of motion, where the 3D avatar engine is configured to generate the 3D avatar by tailoring component structures of the 3D avatar based on golf-specific criteria including body points that correspond with pose, movement, and/or alignment features related to golf; and
Patent: provide a display generator configured to: i) display the 3D avatar of the user's golf swing and animate the 3D avatar to cause movement of the 3D avatar based on the measured, quantified movement from the sequence of images; and ii) display the user selected kinematic parameters with the 3D avatar of the user's golf swing.

Application claims 2-3 correspond to patent claim 1; application claim 4 corresponds to patent claims 1-2; claim 8 to claim 4; claim 11 to claim 8; claim 12 to claim 9; claim 13 to claim 10; claims 21-33 to claims 18-30; and claim 35 to claim 1.
Claim Objections
Claim 29 is objected to because of the following informalities: the language “based on measured movement” in claim 29 is not correct. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, 10, and 12-42 are rejected under 35 U.S.C. 103 as being unpatentable over Song (US 20210004981 A1) in view of Rose (US 20140257538 A1), and further in view of Kosowsky (US 20200398110 A1).
Regarding claim 1 (Currently Amended), Song discloses a computer-implemented system configured for applying biomechanical analysis to a sequence of images of a user's movement during performance of a swing to generate computer-generated three-dimensional (3D) avatars of the swing based on user-selected kinematic parameters and sequences of motion ([0025]: perform an activity and practice yoga; Fig. 1; [0040]: method and system to perform an activity; Fig. 1; [0050]: If the activity were playing golf, the predefined reference locations include the club head and club shaft; [0061]: alter the one or more electronically captured images 108 to identify the differences between the at least one standard reference location and the at least one corresponding subject reference location in one or more electronically altered images 120; Fig. 1; [0070]: present the one or more electronically altered images 120 on the display 121 of the electronic device 107;
[image: media_image7.png]
; [0082]: a golfer might prefer predefined reference locations such as the club head, club shaft, grip, hands, elbows, shoulders, head, hips, knees, feet, and ball; Fig. 10; [0128]: captures one or more three-dimensional images 1001 of a subject 103 performing an activity 104; Fig. 10; [0130]: superimpose a representation 1003 of the standard upon the subject 1004;
[image: media_image8.png]
), the system comprising:
one or more hardware processors configured by machine-readable instructions to: provide a movement module configured to ([0039]: one or more processors; Fig. 1; [0041]: one or more processors):
quantitatively analyze the sequence of images of the user's movement during performance of the swing to generate a quantitative measurement of the kinematic parameters and sequences of motion (Fig. 1; [0050]: If the activity were playing golf, the predefined reference locations include the club head and club shaft; [0082]: a golfer might prefer predefined reference locations such as the club head, club shaft, grip, hands, elbows, shoulders, head, hips, knees, feet, and ball; Fig. 5; [0116]: the position and pose reference point detection engine 407 identifies a plurality of subject reference locations situated at predefined features of a subject depicted performing the activity in the one or more electronically captured images 208; [0017]: identify and analyze subject reference locations in one or more electronically captured images; identify and analyze standard reference locations from one or more electronic images retrieved from a memory device; Fig. 1; [0054]: the plurality of standard reference locations 128,129,130 correspond to the plurality of subject reference locations 124,125,126 situated at predefined features of the depiction 132 of the subject 103 depicted performing the activity 104 in one or more electronically captured images 108 on a one-to-one basis; Fig. 8; [0068]: geometric alignment 803 refers to the left elbow, and indicates the subject has an elbow angle of thirty degrees;
[image: media_image9.png]
; Fig. 10; [0131]: identify the differences between the at least one standard reference location and the at least one corresponding subject reference location in one or more electronically altered three-dimensional images 1005), and
provide a display generator configured to display visual representations of the quantitative measurement of the kinematic parameters and sequences of motion on the user device (Fig. 1; [0048]: the electronically captured multiple images 108 of the subject 103 performing the activity 104 are automatically converted to electronic data and models by processors; Fig. 8; [0068]: geometric alignment 803 refers to the left elbow, and indicates the subject has an elbow angle of thirty degrees;
[image: media_image9.png]
; Fig. 2; [0087]: the virtual reality headset provides a three-dimensional experience to the user;
[image: media_image10.png]
[image: media_image11.png]
; [0108]: a video output component displays a sequence of images; Fig. 10; [0130]: superimpose a representation 1003 of the standard upon the subject 1004 in the one or more three-dimensional images 1001 of a subject 103 performing an activity 104;
[image: media_image5.png]
; Fig. 10; [0132]: display altered three-dimensional images), wherein the visual representations comprise user-selected kinematic parameters with a 3D avatar of the swing, animated to cause movement of the 3D avatar displayed on the user device ([0048]: the electronically captured images 108 of the subject 103 performing the activity 104 are automatically converted to electronic data and models by processors; Fig. 1; [0050]: If the activity were playing golf, the predefined reference locations include the club head and club shaft; Fig. 6; [0063]: cause the depiction 606 of the standard to appear superimposed atop a depiction 607 of the subject performing the activity in the electronically altered image 601; Fig. 8; [0066]: the audio data 802 instructs the subject to move a predefined feature of the subject toward a predefined standard reference location; move left elbow right!; Fig. 8; [0068]: geometric alignment 805 refers to the right knee, and indicates the subject has a knee bend of eight degrees, while the standard has a knee bend of twenty-five degrees;
[image: media_image4.png]
; Fig. 1; [0070]: the one or more standard reference locations 128,129,130 are selected parameters; Fig. 2; [0082]: these predefined features 218 are selected as reference locations of both the subject and the standard; a golfer prefers predefined reference locations such as the club head, club shaft, grip, hands, elbows, shoulders, head, hips, knees, feet, and ball; Fig. 2; [0087]: the virtual reality headset provides a three-dimensional experience to the user;
[image: media_image10.png]
[image: media_image11.png]
; [0108]: a video output component displays a sequence of images; Fig. 1; Fig. 10; [0130]: superimpose a representation 1003 of the standard upon the subject 1004 in the one or more three-dimensional images 1001 of a subject 103 performing an activity 104;
[image: media_image5.png]
; the 3D avatars of the piano player, golfer, and yoga player are tailored; [0137]: instruct the subject to move a predefined feature of the subject toward a predefined standard reference location to appear in the one or more electronically captured images).
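The geometric alignments Song reports (e.g., the thirty-degree elbow angle of [0068]) correspond to a standard joint-angle computation over reference-point coordinates. The following is a minimal illustrative sketch only; the function name and coordinate values are hypothetical and are not drawn from the Song reference:

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (in degrees) formed by reference points a-b-c,
    each given as an (x, y) coordinate pair from a captured image."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical shoulder-elbow-wrist reference locations.
shoulder, elbow, wrist = (0.0, 2.0), (0.0, 1.0), (1.0, 1.0)
print(round(joint_angle(shoulder, elbow, wrist)))  # 90
```

The same computation applies to any pair of subject and standard reference locations (elbow angle versus standard elbow angle, knee bend versus standard knee bend), which is how a difference such as "eight degrees versus twenty-five degrees" in [0068] could be quantified.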
Song fails to explicitly disclose:
identify pre-shot routines where the user swings but does not hit a ball, wherein a pre-shot routine comprises a set of actions performed by the user prior to an actual swing in which the user hits the ball;
provide a machine learning module configured to determine a user's standard practice routine over time and determine when a new practice routine deviates from the standard practice routine;
In the same field of endeavor, Rose teaches:
identify pre-shot routines where the user swings but does not hit a ball ([0023]: optimize a golf swing in a pre-shot routine by measuring and providing feedback; [0042]: during practice exercises; Fig. 1; [0048]: the beginning of backswing, initial downswing, mid-downswing, and late downswing are pre-shot routines in which the user does not hit a ball;
[image: media_image1.png]
; Fig. 2A; [0058]: downswing was initiated by the reversal of pelvic rotation followed by a reversal of upper torso rotation;
[image: media_image2.png]
; backswing, downswing, and upper torso rotation are pre-shot routines in which the user does not hit a ball), wherein a pre-shot routine comprises a set of actions performed by the user prior to an actual swing in which the user hits the ball ([0023]: optimize a golf swing in a pre-shot routine by measuring and providing feedback; [0042]: during practice exercises; testing and analysis; [0047]: calibration procedures are customized to the specific types of sensors being used in the testing environment; the sensors are adjusted based on the initial conditions of the testing session; the initial conditions of the testing session include the pre-shot routine; [0054]: the backswing initiated by the rising clubhead is a set of actions; in these actions, the user does not hit the ball; these actions are prior to the ball-hitting actions, i.e., prior to the actual swing; Fig. 2A; [0058]: backswing, downswing, and upper torso rotation are a set of actions; in these actions, the user does not hit a ball; these actions are prior to the ball-hitting actions, i.e., prior to the actual swing);
provide a display generator configured to display visual representations of sequences of motion on the user device (Fig. 2A; [0049]: the display may use perspective techniques to display a three-dimensional animation, i.e. sequences of motion, such that body movements are seen from different points of view in a three-dimensional space;
[image: media_image12.png]
; the graphical display may include videos of other subjects performing ideal or non-ideal golf swings with explanations; the video and Fig. 2A display sequences of motion), wherein the visual representations comprise user-selected kinematic parameters with a 3D avatar of the swing, animated to cause movement of the 3D avatar displayed on the user device (Fig. 2A; Fig. 2B; [0049]: the graphical display includes images of an avatar performing the movement; display a three-dimensional animation such that body movements can be seen from different points of view in a three-dimensional space; the graphical display may include videos of other subjects performing ideal or non-ideal golf swings;
[image: media_image3.png]
; the video and Fig. 2A display sequences of motion).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Song to include identifying pre-shot routines where the user swings but does not hit a ball, wherein a pre-shot routine comprises a set of actions performed by the user prior to an actual swing in which the user hits the ball; and providing a display generator configured to display visual representations of sequences of motion on the user device, wherein the visual representations comprise user-selected kinematic parameters with a 3D avatar of the swing, animated to cause movement of the 3D avatar displayed on the user device, as taught by Rose. The motivation for doing so would have been to analyze and improve a subject's golf swing; to provide data on how the subject can improve his or her technique; and to average biomechanical factors of the professional golfers' hard swings within subjects, as taught by Rose in paragraphs [0011], [0046], and [0056].
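One way a system could distinguish a practice (pre-shot) swing from an actual swing is by the absence of a ball-impact signature in the sensor trace. The sketch below is purely illustrative of that idea; the threshold value, function name, and acceleration data are hypothetical and are not taken from the Rose reference:

```python
IMPACT_THRESHOLD = 30.0  # hypothetical peak-acceleration cutoff for ball contact

def is_practice_swing(accel_trace):
    """Treat a swing as a practice (pre-shot) swing when its
    acceleration trace never spikes above the impact threshold,
    i.e., the user swings but does not hit a ball."""
    return max(accel_trace) < IMPACT_THRESHOLD

swings = [
    [2.1, 5.0, 8.3, 6.2],    # smooth trace: no ball contact
    [2.0, 6.1, 45.7, 9.8],   # sharp spike: ball impact
]
print([is_practice_swing(s) for s in swings])  # [True, False]
```
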
Song in view of Rose fails to explicitly disclose:
provide a machine learning module configured to determine a user's standard practice routine over time and determine when a new practice routine deviates from the standard practice routine;
In the same field of endeavor, Kosowsky teaches:
provide a machine learning module configured to determine a user's standard practice routine over time and determine when a new practice routine deviates from the standard practice routine ([0057]: allow the user to apply statistical analysis to compare their motions to the motions of other individuals or to the aggregated motions of a set of other individuals; [0101]: the golfer may want to see how many times their head or part of the body stayed within the boundaries for swings of a particular type of club each of the preceding sessions; [0102]: the user compares the visualizations of their good swings with their bad swings to help in their training; Fig. 10; [0103]: using machine learning algorithms, models 1003 or 1004 are computed that will predict the qualitative result—good or bad 1005—or the quantitative result—expected distance 1006 the golf ball will travel—given the series of timestamps and target positions 1001 or 1002; machine learning algorithms determine a golf swing is good, i.e., standard practice routine; machine learning algorithms determine a golf swing is bad, i.e., deviates from the standard practice routine; Fig. 11; [0104]: apply machine learning to predict the result of a swing from the series of target positions 1001 and 1002).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Song in view of Rose to include providing a machine learning module configured to determine a user's standard practice routine over time and determine when a new practice routine deviates from the standard practice routine, as taught by Kosowsky. The motivation for doing so would have been to improve the golfer's golf swing; to provide a system for alerting a user when the movement of a target exceeds a threshold as measured in distances in the real, physical space of the user; and to determine whether a golf swing is good or bad using machine learning algorithms, as taught by Kosowsky in Fig. 1 and paragraphs [0058-0059] and [0103-0104].
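The kind of classification Kosowsky describes in [0103], predicting a qualitative good/bad result from a series of target positions, can be sketched with a simple nearest-centroid model. This is an illustrative stand-in only: the feature choice (per-swing head and knee drift), the data, and all names are hypothetical and are not drawn from the Kosowsky reference:

```python
def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(good_swings, bad_swings):
    """Learn the 'standard' (good) and 'deviating' (bad) routines
    as per-class centroids of labeled feature vectors."""
    return centroid(good_swings), centroid(bad_swings)

def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def predict(model, swing):
    """Label a new swing by whichever centroid it lies closer to."""
    good_c, bad_c = model
    return "good" if dist2(swing, good_c) <= dist2(swing, bad_c) else "bad"

# Hypothetical training data: [head drift, knee drift] per swing.
good = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15]]
bad = [[0.9, 0.8], [0.8, 1.0], [1.0, 0.9]]
model = train(good, bad)
print(predict(model, [0.2, 0.2]))   # good
print(predict(model, [0.85, 0.9]))  # bad
```

A deployed system along Kosowsky's lines would presumably use richer models over timestamped target positions, but the structure is the same: accumulate labeled routines over time, then flag a new routine that falls on the "deviating" side of the learned decision boundary.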
Regarding claim 2 (Original), Song in view of Rose and Kosowsky discloses the computer-implemented system of claim 1, wherein the swing is a golf swing (Rose; [0023]: optimize a golf swing by measuring and providing feedback; Fig. 1; [0048]: the beginning of backswing, initial downswing, mid-downswing, and late downswing are pre-shot routines in which the user does not hit a ball;
[image: media_image1.png]
; Fig. 1; [0054]: analyze golf swings using in-house algorithms). The same motivation as in claim 1 applies here.
Regarding claim 3 (Previously Presented), Song in view of Rose and Kosowsky discloses the computer-implemented system of claim 1, wherein a practice swing is a pre-shot routine in golf (Rose; [0023]: optimize a golf swing in training by measuring and assessing these parameters for a subject; provide feedback that is utilized for training; [0042]: training tools are used in practice exercises, i.e., during practice exercises; Fig. 1; [0048]: the beginning of backswing, initial downswing, mid-downswing, and late downswing are pre-shot routines in which the user does not hit a ball;
[image: media_image1.png]
; [0053]: a plastic practice ball was wrapped in light-reflective tape and placed on a synthetic grass mat; each subject performed three swings of different efforts; Fig. 1; [0054]: the initiation of downswing was defined by the transition of the clubhead direction at the top of backswing).
The same motivation as in claim 1 applies here.
Regarding claim 4 (Currently Amended), Song in view of Rose and Kosowsky discloses the computer-implemented system of claim 1, wherein the one or more hardware processors are further configured to (same as rejected in claim 1):
receive, from at least one image capture device, the sequence of images of the user's movement during the performance of the swing (Song; [0025]: these identified subject reference locations are mapped to the electronically captured images and stored in the metadata of these images; Fig. 1; [0042]: the electronic device 107 receives captured images; an image capture device electronically captures one or more electronically captured images 108 of the subject 103 performing the activity 104; [0075]: a stereoscopic camera 202 captures three dimensional images of the user performing the activity; Fig. 10; [0128]: a stereoscopic camera 202 captures one or more three-dimensional images 1001 of a subject 103 performing an activity 104; Fig. 10; [0130]: receive the captured images); and
provide a watch list engine configured to enable a user to select via a user interface one or more golf-specific, kinematic parameters and sequences of motion during the swing (Kosowsky; Fig. 2C; [0072]: the head 2c02 of the golfer 2c01 and the knee 2c04 of the golfer 2c01 are selected and tracked; the golfer is alerted when either the head 2c02 moves outside the boundaries depicted as rectangle 2c03, or the knee 2c04 moves outside the boundaries depicted as rectangle 2c05; generate various alerts for various sets of combinations of targets crossing boundaries; Fig. 17; Fig. 18; [0119]: determine the most salient fault affecting a user's performance across a set of activities; this sorted list is presented to the user to inform them of the most important factors that affected their performance in the session) and to correlate results of the watch list engine with additional context data to affect what information is presented to the user (Kosowsky; Fig. 2C; [0072]: the head 2c02 of the golfer 2c01 and the knee 2c04 of the golfer 2c01 are selected and tracked; the golfer is alerted when either the head 2c02 moves outside the boundaries depicted as rectangle 2c03, or the knee 2c04 moves outside the boundaries depicted as rectangle 2c05; generate various alerts for various sets of combinations of targets crossing boundaries; Fig. 17; Fig. 18; [0119]: determine the most salient fault affecting a user's performance across a set of activities; this sorted list is presented to the user to inform them of the most important factors that affected their performance in the session), wherein additional context data comprises non-kinematic information that is used to interpret performance and tailor feedback, visuals, or recommendations presented to the user (or is optional; Kosowsky; Fig. 17; Fig. 18; [0119]: determine the most salient fault affecting a user's performance across a set of activities; this sorted list is presented to the user to inform them of the most important factors that affected their performance in the session; suggest the user try to alter his motion to eliminate the most severe fault; provide feedback to the user as to the user's progress in addressing the focus or multiple foci over multiple performances of the activity; motion and focus are non-kinematic information); and
alter visual representations based on the correlated results of the watch list engine with additional context data (Kosowsky; [0130]: if the target is not found within the desired limits in the image, then the user is told, at step 2008, to either move back from or forward towards the