DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment, filed 12/30/2025, has been entered. The examiner notes claims 1-11, 13-15, 17-18, and 20-23 are pending.
Response to Arguments
Applicant’s arguments, see Remarks page 9, filed 12/30/2025, with respect to the objection to the specification have been fully considered and are persuasive. The applicant has amended the specification to overcome the objection. The objection to the specification has been withdrawn.
Applicant’s arguments, see Remarks pages 9-10, filed 12/30/2025, with respect to the 35 USC 112 rejection of claim 10 have been fully considered and are persuasive. The applicant has amended the claim to overcome the 35 USC 112 rejection. The 35 USC 112 rejection of claim 10 has been withdrawn.
Applicant’s arguments, see Remarks pages 10-13, filed 12/30/2025, with respect to the 35 USC 101 rejection of claims 1-11, 13-15, 17-18, and 20 have been fully considered and are persuasive. The examiner agrees that the claimed “determining” of a tension or isometric force from images cannot practically be performed in the human mind; the claims therefore do not recite an abstract idea at Step 2A, Prong One, of the Alice/Mayo test and are found to be directed to eligible subject matter. The 35 USC 101 rejection of claims 1-11, 13-15, 17-18, and 20 has been withdrawn.
Applicant’s arguments, see Remarks pages 13-14, filed 12/30/2025, with respect to the rejection(s) of claim(s) 1-11, 13-15, 17-18, and 20 under 35 USC 102 and 103 have been fully considered and are persuasive. The examiner agrees with the applicant that the prior art of record from the previous office action fails to teach the kinematic measurements are specifically tension or isometric force. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Nakamura (US 20230031291 A1).
Claim Objections
Claim 1 is objected to because of the following informalities:
Claim 1, line 1, should recite “…movements of a moveable…”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 7-8, 10-11, 13-15, 17-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson (US 20150099252 A1) in view of Nakamura (US 20230031291 A1).
Regarding claim 1, Anderson teaches a system for analyzing one or more movements of moveable entity, the system comprising:
a processor [Fig. 1 Item 102] in communication with a camera [Fig. 2 Item 220 “video capture module”, see also para. 0037 “…the movement training system 200 may implement the computer system 100 of FIG. 1”], wherein the processor is configured to:
determine one or more skeletal locations of the moveable entity performing a movement based on a plurality of images captured by the camera [0040 “The movement training application 210 may create articulable skeletal representations, also referred to herein as skeletal representations, of the movement data of the author and trainee, thereby creating ‘stick figures’”];
determine one or more kinematic measurements of the one or more skeletal locations of the moveable entity based on the plurality of images [0040 “motion information”];
compare the one or more kinematic measurements of the moveable entity with an ideal kinematic movement for the moveable entity [0040 “The movement training application 210 compares motion information of the author with motion information of the trainee to assess the trainee's progress and compute a progress score”]; and
provide an analysis of the moveable entity based on the comparing of the one or more kinematic measurements of the moveable entity with the ideal kinematic movement for the moveable entity [0040 “The movement training application 210 compares motion information of the author with motion information of the trainee to assess the trainee's progress and compute a progress score”], the analysis provided on a display while the moveable entity is performing the movement [0054 “During the record process, the video image 412 and the skeletal movement representation 413 reflect live capture of the image and motion tracking data of the author. After recording stops, the video image 412 and the skeletal movement representation 413 reflect the previously recorded movement”, see also Fig. 4A-B].
Anderson teaches determining one or more kinematic measurements, but fails to specifically teach the kinematic measurements comprise a tension or isometric force.
Nakamura teaches the kinematic measurements comprise a tension or isometric force [0130 “…the motion analysis unit estimates the muscle tension and muscle activity of the subject during walking, visualizes and displays them on the display”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Anderson and incorporate the teachings of Nakamura to include the kinematic measurements comprise tension or isometric force. Doing so configures the system to assess the subject at a particular bodily region, providing for a more detailed understanding of muscle strength at different angles, which is crucial for identifying weaknesses or imbalances that may not be evident in traditional dynamic strength assessments.
Regarding claim 2, Anderson and Nakamura teach the system as claimed in claim 1, wherein the ideal kinematic movement is based on at least one of a size, a weight [Anderson 0066 “…skeletal representations may be compared even when the user 350 has a significantly different age, height or weight from the author of the stored movement”], an age [Anderson 0066], a gender, a limb length [Anderson 0066 “The movement training system 200 scales the captured skeletal representation by dynamically resizing each bone in the captured skeletal representation to match the size of the corresponding bone in the skeletal representation of a movement in the movement database 242”], a limb diameter, a limb density, a body diameter, a body density, and a skeletal makeup of the moveable entity [Anderson Fig. 4A Item 413].
Regarding claim 3, Anderson and Nakamura teach the system as claimed in claim 1, wherein the processor is further configured to:
determine one or more location points of an object associated with the moveable entity while performing the movement [Anderson 0046 “The motion tracking unit 320 reports the 3D coordinates of various key positions, such as joint positions, of the user 350”]; and
determine the one or more kinematic measurements of the object based on the one or more location points [Anderson 0069 “Various possible features may be used to create the feature vectors, including, without limitation, average joint velocity, individual joint velocities, starting and final joint positions, and starting and final bone orientations.”].
Regarding claim 4, Anderson and Nakamura teach the system as claimed in claim 1, wherein the processor is further configured to:
determine a first location of each skeletal location of the moveable entity [Anderson 0069 “…starting and final joint positions”];
determine one or more subsequent locations of each skeletal location while the moveable entity moves at intervals [Anderson 0064 “The motion tracking unit 320 provides updated joint positions at multiple times per second…”];
receive the plurality of images of the moveable entity captured by the camera at each interval [Anderson Fig. 4B]; and
determine the one or more kinematic measurements at the first location and subsequent locations for a selected image at each interval [Anderson 0070 “The movement training system 200 may compare the entire captured movement with the entire stored movement”].
Regarding claim 5, Anderson teaches the system as claimed in claim 1, wherein the display [Anderson Fig. 4A Item 412] is configured to display the one or more kinematic measurements during or after the moveable entity performs the movement [Anderson 0054 “During the record process, the video image 412 and the skeletal movement representation 413 reflect live capture of the image and motion tracking data of the author. After recording stops, the video image 412 and the skeletal movement representation 413 reflect the previously recorded movement”, see also Anderson Fig. 4A-B].
Regarding claim 7, Anderson and Nakamura teach the system as claimed in claim 3, wherein the processor is further configured to:
determine a first object location of the object that is associated with the moveable entity [Anderson 0046 “The motion tracking unit 320 reports the 3D coordinates of various key positions, such as joint positions, of the user 350”];
determine one or more subsequent object locations of the object at intervals [Anderson 0064 “The motion tracking unit 320 provides updated joint positions at multiple times per second…”];
receive the plurality of images of the object captured by the camera at each interval [Anderson Fig. 4B]; and
determine the one or more kinematic measurements at the first object location and subsequent object locations for a selected image at each interval [Anderson 0069 “Various possible features may be used to create the feature vectors, including, without limitation, average joint velocity, individual joint velocities, starting and final joint positions, and starting and final bone orientations,”].
Regarding claim 8, Anderson teaches the system as claimed in claim 7, further comprising the display [Anderson Fig. 4A Item 412] that displays the one or more kinematic measurements of the object during or after the moveable entity is performing the movement [Anderson 0054 “During the record process, the video image 412 and the skeletal movement representation 413 reflect live capture of the image and motion tracking data of the author. After recording stops, the video image 412 and the skeletal movement representation 413 reflect the previously recorded movement”, see also Anderson Fig. 4A-B].
Regarding claim 10, Anderson and Nakamura teach the system as claimed in claim 1, wherein the processor is further configured to:
receive an input from the moveable entity based on the analysis, wherein the input comprises at least one of a voice message, verbal message, email message, button [Anderson 0051 “…the user-interface for the movement training environment 300 may include two button types”], QR code, gesture signal [Anderson 0051 “The fixed offset positioning of the quick-access contextual buttons allows these buttons to be activated by a `gesture posture,` such as a sweep of the hand in a particular direction”], or a combination thereof.
Regarding claim 11, Anderson and Nakamura teach the system as claimed in claim 1, wherein the processor is further configured to provide a trend based on the analysis of two or more movements of the moveable entity [0040 “The movement training application 210 compares motion information of the author with motion information of the trainee to assess the trainee's progress and compute a progress score”];
wherein the trend comprises a rating [Anderson 0040 “progress score”] based on the one or more kinematic measurements [Anderson 0040 “The movement training application 210 compares motion information of the author with motion information of the trainee to assess the trainee's progress and compute a progress score”].
Regarding claim 13, Anderson teaches a method for analyzing one or more movements of a moveable entity, the method comprising:
determining, by a processor [Fig. 1 Item 102] in communication with a camera [Fig. 2 Item 220 “video capture module”, see also para. 0037 “…the movement training system 200 may implement the computer system 100 of FIG. 1”], one or more skeletal locations of the moveable entity performing a movement based on a plurality of images captured by the camera [0040 “The movement training application 210 may create articulable skeletal representations, also referred to herein as skeletal representations, of the movement data of the author and trainee, thereby creating ‘stick figures’”];
determining, by the processor, one or more kinematic measurements of the one or more skeletal locations of the moveable entity based on the plurality of images [0040 “motion information”];
comparing, by the processor, the one or more kinematic measurements of the moveable entity with an ideal kinematic movement for the moveable entity [0040 “The movement training application 210 compares motion information of the author with motion information of the trainee to assess the trainee's progress and compute a progress score”]; and
providing, by the processor, an analysis of the moveable entity based on the comparing of the one or more kinematic measurements of the moveable entity with the ideal kinematic movement for the moveable entity [0040 “The movement training application 210 compares motion information of the author with motion information of the trainee to assess the trainee's progress and compute a progress score”], the analysis provided on a display while the moveable entity is performing the movement [0054 “During the record process, the video image 412 and the skeletal movement representation 413 reflect live capture of the image and motion tracking data of the author. After recording stops, the video image 412 and the skeletal movement representation 413 reflect the previously recorded movement”, see also Fig. 4A-B].
Anderson teaches determining one or more kinematic measurements, but fails to specifically teach the kinematic measurements comprise a tension or isometric force.
Nakamura teaches the kinematic measurements comprise a tension or isometric force [0130 “…the motion analysis unit estimates the muscle tension and muscle activity of the subject during walking, visualizes and displays them on the display”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Anderson and incorporate the teachings of Nakamura to include the kinematic measurements comprise tension or isometric force. Doing so configures the system to assess the subject at a particular bodily region, providing for a more detailed understanding of muscle strength at different angles, which is crucial for identifying weaknesses or imbalances that may not be evident in traditional dynamic strength assessments.
Regarding claim 14, Anderson and Nakamura teach the method as claimed in claim 13, wherein the ideal kinematic movement is based on at least one of a size of the moveable entity, a weight of the moveable entity [Anderson 0066 “…skeletal representations may be compared even when the user 350 has a significantly different age, height or weight from the author of the stored movement”], an age of the moveable entity [Anderson 0066], a gender of the moveable entity, and a skeletal makeup of the moveable entity [Anderson Fig. 4A Item 413].
Regarding claim 15, Anderson and Nakamura teach the method as claimed in claim 13, further comprising:
determining one or more location points of an object that is associated with the moveable entity while performing the movement [Anderson 0046 “The motion tracking unit 320 reports the 3D coordinates of various key positions, such as joint positions, of the user 350”];
determining the one or more kinematic measurements of the object based on the one or more location points [Anderson 0069 “Various possible features may be used to create the feature vectors, including, without limitation, average joint velocity, individual joint velocities, starting and final joint positions, and starting and final bone orientations.”],
determining a first location of each skeletal location of the moveable entity [Anderson 0069 “…starting and final joint positions”];
determining one or more subsequent locations of each skeletal location while the moveable entity moves at intervals [Anderson 0064 “The motion tracking unit 320 provides updated joint positions at multiple times per second…”];
receiving the plurality of images of the moveable entity captured by the camera at each interval [Anderson Fig. 4B]; and
determining the one or more kinematic measurements at the first location and subsequent locations for a selected image at each interval [Anderson 0070 “The movement training system 200 may compare the entire captured movement with the entire stored movement”].
Regarding claim 17, Anderson and Nakamura teach the method as claimed in claim 15, further comprising displaying the one or more kinematic measurements at least while the moveable entity is performing the movement or after the moveable entity has performed the movement [Anderson 0054 “During the record process, the video image 412 and the skeletal movement representation 413 reflect live capture of the image and motion tracking data of the author. After recording stops, the video image 412 and the skeletal movement representation 413 reflect the previously recorded movement”, see also Anderson Fig. 4A-B].
Regarding claim 18, Anderson and Nakamura teach the method as claimed in claim 15, further comprising:
determining a first object location of the object that is associated with the moveable entity [Anderson 0046 “The motion tracking unit 320 reports the 3D coordinates of various key positions, such as joint positions, of the user 350”];
determining one or more subsequent locations of the object at intervals [Anderson 0064 “The motion tracking unit 320 provides updated joint positions at multiple times per second…”];
receiving the plurality of images of the object captured by the camera at each interval [Anderson Fig. 4B]; and
determining the one or more kinematic measurements at the first object location and subsequent object locations for a selected image at each interval [Anderson 0069 “Various possible features may be used to create the feature vectors, including, without limitation, average joint velocity, individual joint velocities, starting and final joint positions, and starting and final bone orientations,”]; and
displaying, by a display [Anderson Fig. 4A Item 412], the one or more kinematic measurements of the object at least while the moveable entity is performing the movement or after the moveable entity has performed the movement [Anderson 0054 “During the record process, the video image 412 and the skeletal movement representation 413 reflect live capture of the image and motion tracking data of the author. After recording stops, the video image 412 and the skeletal movement representation 413 reflect the previously recorded movement”, see also Anderson Fig. 4A-B].
Regarding claim 20, Anderson teaches one or more non-transitory computer-readable storage mediums storing one or more sequences of instructions [Fig. 1 Item 104], which when executed by one or more processors [Fig. 1 Item 102], causes the one or more processors to perform one or more steps of:
determining one or more skeletal locations of a moveable entity performing a movement based on a plurality of images [0040 “The movement training application 210 may create articulable skeletal representations, also referred to herein as skeletal representations, of the movement data of the author and trainee, thereby creating ‘stick figures’”] captured by a camera in communication with the one or more processors [Fig. 2 Item 220 “video capture module”, see also para. 0037 “…the movement training system 200 may implement the computer system 100 of FIG. 1”];
determining one or more kinematic measurements of the one or more skeletal locations of the moveable entity based on the plurality of images [0040 “motion information”];
determining one or more location points of an object that is associated with the moveable entity while performing the movement [0046 “The motion tracking unit 320 reports the 3D coordinates of various key positions, such as joint positions, of the user 350”];
determining the one or more kinematic measurements of the object based on the one or more location points [0069 “Various possible features may be used to create the feature vectors, including, without limitation, average joint velocity, individual joint velocities, starting and final joint positions, and starting and final bone orientations,”];
comparing the one or more kinematic measurements of the moveable entity and the object with an ideal kinematic movement [0040 “The movement training application 210 compares motion information of the author with motion information of the trainee to assess the trainee's progress and compute a progress score”]; and
providing an analysis of the moveable entity based on the comparing of the one or more kinematic measurements of the moveable entity with the ideal kinematic movement for the moveable entity [0040 “The movement training application 210 compares motion information of the author with motion information of the trainee to assess the trainee's progress and compute a progress score”], the analysis provided on a display while the moveable entity is performing the movement [0054 “During the record process, the video image 412 and the skeletal movement representation 413 reflect live capture of the image and motion tracking data of the author. After recording stops, the video image 412 and the skeletal movement representation 413 reflect the previously recorded movement”, see also Fig. 4A-B].
Anderson teaches determining one or more kinematic measurements, but fails to specifically teach the kinematic measurements comprise a tension or isometric force.
Nakamura teaches the kinematic measurements comprise a tension or isometric force [0130 “…the motion analysis unit estimates the muscle tension and muscle activity of the subject during walking, visualizes and displays them on the display”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Anderson and incorporate the teachings of Nakamura to include the kinematic measurements comprise tension or isometric force. Doing so configures the system to assess the subject at a particular bodily region, providing for a more detailed understanding of muscle strength at different angles, which is crucial for identifying weaknesses or imbalances that may not be evident in traditional dynamic strength assessments.
Claims 6 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson and Nakamura as applied to claim 1 above, and further in view of Tran (US 9610476 B1).
Regarding claim 6, Anderson and Nakamura teach the system as claimed in claim 1, wherein kinematic features are determined [Anderson 0040], but fail to teach the kinematic features are further determined based on one or more sensors that track movement of the camera.
Tran teaches the kinematic features are further determined based on one or more sensors that track movement of the camera [col. 18 lns. 60-66, the accelerometer/gyrometer/magnetometer integrated into the handle aid in the motion tracking of the camera that is also integrated into the handle].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Anderson and Nakamura and incorporate the teachings of Tran to include the kinematic features are further determined based on one or more sensors that track movement of the camera. Doing so configures the system to analyze the complex movements of an athlete, track performance, improve injury prevention and rehabilitation, and create realistic animations by positioning and tracking a camera that is located on an object commonly used by the athlete, as recognized by Tran.
Regarding claim 9, Anderson and Nakamura teach the system as claimed in claim 1, wherein the system comprises sensors [Anderson 0026], but fail to explicitly teach one or more sensors measure motion of the moveable entity; and wherein the one or more kinematic measurements are further based on the one or more sensors.
Tran teaches one or more sensors measure motion of the moveable entity [col. 18 lns. 60-66]; and
wherein the one or more kinematic measurements are further based on the one or more sensors [col. 19 lns. 6-11].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Anderson and Nakamura and incorporate the teachings of Tran to include one or more sensors measure motion of the moveable entity; and wherein the one or more kinematic measurements are further based on the one or more sensors. Doing so configures the system with a low-cost, portable, and unobtrusive motion analysis in real-world settings, allowing for objective quantification of movement that is not restricted to a lab.
Claims 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson and Nakamura as applied to claim 3 above, and further in view of Marty (US 20080182685 A1).
Regarding claim 21, Anderson and Nakamura teach the system of claim 3, wherein Anderson further teaches the object, but fails to teach the object is a projectile that is detached from the moveable entity.
Marty teaches the object is a projectile that is detached from the moveable entity [0031 “ball”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Anderson and Nakamura and incorporate the teachings of Marty to include the object is a projectile that is detached from the moveable entity. Doing so configures the system to detect trajectory and provide feedback for an associated exercise, as recognized by Marty [0004].
Regarding claim 22, Anderson, Nakamura, and Marty teach the system of claim 21, wherein the processor is further configured to determine velocity [Marty 0031 “…a calculated or actual velocity as the ball leaves the ground or tee”], acceleration [Marty 0045 “…specific ball acceleration result…”], and angular momentum of the projectile [Marty 0090 “…the initial spin of the ball can be determined by calculating the impulse of angular momentum that this interaction generates”]; and
wherein a distance that the projectile could travel is determined based on the velocity [Marty 0031 “…a calculated straight-line distance of a shot…”].
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Anderson and Nakamura as applied to claim 1 above, and further in view of Du (X. Du et al., "Bio-LSTM: A Biomechanically Inspired Recurrent Neural Network for 3-D Pedestrian Pose and Gait Prediction," in IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1501-1508, April 2019 [retrieved on 03-20-2026]. Retrieved from the Internet <URL: https://ieeexplore.ieee.org/abstract/document/8626436/authors#authors > < DOI: 10.1109/LRA.2019.2895266 >).
Regarding claim 23, Anderson and Nakamura teach the system of claim 1, wherein the processor is further configured to fit a model to one or more skeletal locations [Anderson Fig. 4A Item 413], but fail to specifically teach fitting a parametric articulated body model to the one or more skeletal locations; and wherein the parametric articulated body model comprises a skinned multi-person linear body model.
Du teaches fitting a parametric articulated body model to the one or more skeletal locations [Du Fig. 1, see also page 1502 section II. B. “…the output is a full-body 3D mesh in addition to traditional skeleton-based 3D joint locations [28], [34]; and 3) it is a parametric statistical model that can easily represent the location, pose, and shape of a person by a vector of parameters”]; and
wherein the parametric articulated body model comprises a skinned multi-person linear body model [Du Fig. 1, see also page 1502 section II. B. “…we represent the 3D human pose using the Skinned Multi-Person Linear (SMPL) model”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Anderson and Nakamura and incorporate the teachings of Du to include fitting a parametric articulated body model to the one or more skeletal locations; and wherein the parametric articulated body model comprises a skinned multi-person linear body model. Doing so configures the system to utilize a model that “…can represent varying human-body shapes and poses accurately and realistically”, as recognized by Du page 1502 section II. B.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M HANEY whose telephone number is (571)272-0985. The examiner can normally be reached Monday through Friday, 0730-1630 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Valvis can be reached at (571)272-4233. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN M HANEY/Examiner, Art Unit 3791
/JUSTIN XU/Primary Examiner, Art Unit 3791