Prosecution Insights
Last updated: April 19, 2026
Application No. 17/768,019

METHOD, DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR ESTIMATING INFORMATION ON GOLF SWING

Final Rejection (§103, §112)
Filed: Apr 11, 2022
Examiner: RUSH, ERIC
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Moais Inc.
OA Round: 4 (Final)
Grant Probability: 61% (Moderate)
OA Rounds: 5-6
Time to Grant: 3y 5m
Grant Probability with Interview: 97%

Examiner Intelligence

Career Allow Rate: 61% (grants 383 of 628 resolved cases; -1.0% vs TC avg)
Interview Lift: +36.2% (resolved cases with interview)
Avg Prosecution: 3y 5m (typical timeline; 32 applications currently pending)
Total Applications: 660 (career history, across all art units)

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 628 resolved cases
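For reference, the headline figures above are simple derived statistics. A minimal sketch reproducing them from the displayed counts (the Tech Center averages are back-calculated from the displayed deltas, and are therefore estimates, not source data):

```python
# Reproduce the examiner dashboard metrics from the counts shown above.
granted, resolved = 383, 628

# Career allowance rate: granted / resolved cases.
allow_rate = granted / resolved
assert round(allow_rate * 100) == 61  # displayed as 61%

# Statute-specific allowance rates and their deltas vs. the Tech Center
# average; back-calculating the TC average from rate - delta shows the
# chart's comparison baseline is a flat ~40% across statutes.
statute_rate = {"101": 10.8, "102": 12.7, "103": 40.0, "112": 27.7}
delta_vs_tc = {"101": -29.2, "102": -27.3, "103": 0.0, "112": -12.3}
tc_average = {s: round(statute_rate[s] - delta_vs_tc[s], 1) for s in statute_rate}
print(tc_average)  # every statute maps to 40.0
```

Note that the §103 rate matching the TC average exactly (+0.0%) is consistent with this examiner being no harder than average on obviousness, while the §101/§102 figures sit well below the baseline.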

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is responsive to the amendments and remarks received 30 October 2025. Claims 1 and 5 - 18 are currently pending.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The rejections of claims 1 and 5 - 14 under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, are hereby withdrawn in view of the amendments and remarks received 30 October 2025.

Response to Arguments

Applicant’s arguments with respect to claims 1 and 5 - 18 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's arguments filed 30 October 2025 have been fully considered but they are not persuasive.

On page 12 of the remarks the Applicant’s Representative argues that the previously cited prior art references “do not disclose, teach or suggest at least ‘wherein the user's golf swing and the comparison target's golf swing are constituted by a plurality of stages of partial motions, respectively, and similarity between the user's golf swing and the comparison target's golf swing is identified in consideration of the plurality of stages of partial motions.’” In particular, the Applicant’s Representative argues that Zhang et al.
do “not teach or suggest ‘similarity between the user's golf swing and the comparison target's golf swing is identified in consideration of the plurality of stages of partial motions,’ as recited in amended independent claim 1” at least because Zhang et al. describe that “the video may be considered as a sequence of images” and that “the similarity between the pose of the player in each image frame and the standard pose is identified without considering a plurality of stages of partial motions, such as an address, a takeaway, a back swing, a top-of-swing, a down swing, an impact, a follow-through, and a finish.” The Examiner respectfully disagrees. Initially, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “considering a plurality of stages of partial motions, such as an address, a takeaway, a back swing, a top-of-swing, a down swing, an impact, a follow-through, and a finish” (emphasis added)) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Furthermore, the Examiner asserts that Zhang et al. disclose, at least, “wherein the user's golf swing and the comparison target's golf swing are constituted by a plurality of stages of partial motions, respectively, and similarity between the user's golf swing and the comparison target's golf swing is identified in consideration of the plurality of stages of partial motions”, see at least figures 18A and 18B, page 1 paragraphs 0014 - 0017, page 7 paragraph 0125, page 9 paragraphs 0159 - 0162, page 15 paragraphs 0257 - 0261 and 0265 - 0271, page 16 paragraph 0280 - page 17 paragraph 0291 and page 18 paragraph 0316 - page 19 paragraph 0319 of Zhang et al. 
wherein they disclose that “the detecting of the poses inconsistency includes obtaining bone node vectors of the object and a second object, respectively; determining a degree of a pose similarity between the bone node vectors of the object and the second object; and detecting the pose inconsistency between the object and the second object based on the degree of the pose similarity between the bone node vectors of the object and the second object” [0017], that “the performing of pose consistency detection with respect to the object and the associated object may include acquiring bone node vectors of the object and the associated object in a set 3D space, determining similarity between the bone node vectors corresponding to the object and the associated object, determining, based on the similarity between the bone node vectors, pose similarity between the object and the associated object, and detecting, according to the pose similarity, pose consistency between the object and the associated object” [0258], that “in the case of a video segment to which the image belongs and according to a pose consistency detection result between the object and the associated object in each image in the video segment, pose consistency detection is performed with respect to the object and the associated object” [0259], that “video may be considered as a sequence of images. Therefore, the standard degree of the pose of the player may be scored by identifying the similarity between the pose of the player in each image frame and the standard pose. In addition, the system may extract key frames in the video according to algorithms related to extraction of key frames of the video. The system may assign great weights to these key frames and perform weighted averaging on the consistency scores of all frames to obtain the final evaluation score of the pose of the player” [0286], that “when the user captures a video, the beginning of video may include redundant frames. 
To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . , Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons… Thus, image frames in the standard video, which correspond to all image frames in the video taken by the user, are determined. The similarity between skeletons is calculated frame by frame, and then the pose of the user is scored” [0288], that “the user obtains images of a user's pose that may be similar to the obtained standard pose. In operation 1805, the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose” [0290], that “based on the comparison between the skeleton information of the standard pose and the skeleton information of the user pose, the degree of user pose accuracy may be evaluated and scored. In an embodiment, as described referring to FIG. 18A, the length and/or the angle made using nodes are detected and compared with a predetermined threshold value. The comparison result may represent whether the user pose is close to the standard pose and how much close the user pose is” [0291] and that the “second processor 2202 may perform, based on the skeleton information of the object and skeleton information of an associated object associated with the object, pose consistency detection with respect to the object and the associated object. 
The second processor 2202 may acquire bone node vectors of the object and bone nodes of the associated object, determine similarity between the bone node vectors corresponding to the object and the associated object, determine, based on the similarity between the bone node vectors, pose similarity between the object and the associated object, and detect, according to the pose similarity, pose consistency between the object and the associated object” [0318]. The Examiner asserts that, as shown herein above and in the cited portions, Zhang et al. disclose performing pose consistency detection between a user pose and a standard pose, wherein the user and standard poses may be golf swing poses of a user and golf swing poses of a professional, that key frames of video of the user’s golf swing and video of the standard golf swing may be determined, that the key frames may correspond to frames of the beginning, ending and middle key gestures of the videos, that the similarity between the pose of the player and the standard pose in each image frame may be identified to determine consistency scores between the poses for the image frames, that great weights may be assigned to the consistency scores of the image frames of the key frames when obtaining a final evaluation score of the pose consistency, that the start and end points of the two poses, the user’s golf swing and the standard golf swing, are synchronized when performing pose consistency detection and that pose consistency may be detected according to the pose similarity between corresponding image frames of the user and standard poses. The Examiner asserts that, at least, the process disclosed by Zhang et al. 
of determining the key frames that correspond to the beginning and ending key gestures of the videos of the user’s and standard golf swings, synchronizing the starting and end points of the user’s and standard golf swings when comparing the golf swings, calculating the similarity between the skeletons in each image frame of the synchronized videos to obtain consistency scores and obtaining a final evaluation score of the pose consistency by performing weighted averaging of the consistency scores with great weights assigned to the key frames corresponds to the aforementioned disputed claim limitation(s). In addition, the Examiner asserts that, for example, the key frames corresponding to the beginning and ending key gestures that are determined by Zhang et al. correspond to address and finish stages, respectively, of a golf swing. Furthermore, the Examiner asserts that Fig. 18A of Zhang et al. explicitly illustrates that a user’s golf swing and a comparison target’s golf swing are constituted by a plurality of stages of partial motions. Therefore, the Examiner asserts that, at least, Zhang et al. disclose the aforementioned disputed claim limitation(s).

On pages 12 - 13 of the remarks the Applicant’s Representative argues that the previously cited prior art references “do not disclose, teach or suggest at least ‘extracting at least one frame corresponding to a predetermined stage among the plurality of stages of the user's golf swing from photographed images of the user's golf swing; determining a comparison point for the extracted at least one frame corresponding to the predetermined stage of the user's golf swing, wherein the comparison point is varied depending on the stage to which the extracted at least one frame corresponds,’”. In particular, the Applicant’s Representative argues that Zhang et al.
do not disclose, teach or suggest “extracting at least one frame corresponding to a predetermined stage among the plurality of stages of the user's golf swing from photographed images of the user's golf swing” or “determining a comparison point for the extracted at least one frame corresponding to the predetermined stage”. The Examiner respectfully disagrees, in part. The Examiner asserts that Zhang et al. disclose “extracting at least one frame corresponding to a predetermined stage among the plurality of stages of the user's golf swing from photographed images of the user's golf swing” and “determining a comparison point for the extracted at least one frame corresponding to the predetermined stage of the user's golf swing”, see at least figures 9, 10, 16A, 16B, 18A and 18B, page 1 paragraphs 0014 - 0017, page 7 paragraph 0125, page 9 paragraphs 0159 - 0162, page 15 paragraphs 0256 - 0261 and 0265 - 0271, page 16 paragraph 0280 - page 17 paragraph 0291 and page 18 paragraph 0316 - page 19 paragraph 0319 of Zhang et al. wherein they disclose that “the detecting of the poses inconsistency includes obtaining bone node vectors of the object and a second object, respectively; determining a degree of a pose similarity between the bone node vectors of the object and the second object; and detecting the pose inconsistency between the object and the second object based on the degree of the pose similarity between the bone node vectors of the object and the second object” [0017], that “skeleton information of the object is generated based on the detected key point information. 
The detected skeleton information of the object may include bone node information and/or bone node vector information of the object” [0125], that “the performing of pose consistency detection with respect to the object and the associated object may include acquiring bone node vectors of the object and the associated object in a set 3D space, determining similarity between the bone node vectors corresponding to the object and the associated object, determining, based on the similarity between the bone node vectors, pose similarity between the object and the associated object, and detecting, according to the pose similarity, pose consistency between the object and the associated object” [0258], that “in the case of a video segment to which the image belongs and according to a pose consistency detection result between the object and the associated object in each image in the video segment, pose consistency detection is performed with respect to the object and the associated object” [0259], that if “a player wants to evaluate his/her sport pose 1810 or acquire adjustment advice, the system may perform pose estimation with respect to the player based on the image to obtain skeleton information. Next, the system may perform pose consistency detection with respect to the player and the object in a standard pose based on the skeleton information of the player and the object in a standard pose” [0284], that “video may be considered as a sequence of images. Therefore, the standard degree of the pose of the player may be scored by identifying the similarity between the pose of the player in each image frame and the standard pose. In addition, the system may extract key frames in the video according to algorithms related to extraction of key frames of the video. 
The system may assign great weights to these key frames and perform weighted averaging on the consistency scores of all frames to obtain the final evaluation score of the pose of the player” [0286], that “if the number of image frames in which the player is playing golf is n, the system may perform pose estimation with respect to each frame and may respectively perform pose consistency evaluation between the pose of the player and the standard pose to obtain a sequence of scores” [0287], that “when the user captures a video, the beginning of video may include redundant frames. To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . , Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons” [0288], that the “standard pose may be, for example, a golf swing pose shown in FIG. 18A” [0289], that “the user obtains images of a user's pose that may be similar to the obtained standard pose. In operation 1805, the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose” [0290] and that “based on the comparison between the skeleton information of the standard pose and the skeleton information of the user pose, the degree of user pose accuracy may be evaluated and scored. In an embodiment, as described referring to FIG. 18A, the length and/or the angle made using nodes are detected and compared with a predetermined threshold value. 
The comparison result may represent whether the user pose is close to the standard pose and how much close the user pose is” [0291]. The Examiner asserts that, as shown herein above and in the cited portions, Zhang et al. disclose performing pose consistency detection between a user pose and a standard pose, wherein the user and standard poses may be golf swing poses of a user and golf swing poses of a professional, that performing pose consistency detection comprises comparing skeleton information of the standard pose with skeleton information of the user pose, that skeleton information is generated based on detected key point information and includes bone node information and bone node vector information, that bone node information includes bone node position information, that pose consistency detection may be performed on each image frame of a sequence of image frames of a video, that key frames of video of the user’s golf swing and video of the standard golf swing may be determined, that the key frames may correspond to frames of the beginning, ending and middle key gestures of the videos, that the start and end points of the two poses, the user’s golf swing and the standard golf swing, are synchronized when performing pose consistency detection and that the length and/or the angle made using nodes are detected and compared with a predetermined threshold value when performing pose consistency detection. The Examiner asserts that, at least, the process disclosed by Zhang et al. of determining the key frames that correspond to the beginning and ending key gestures of the videos of the user’s and standard golf swings and synchronizing the starting and end points of the user’s and standard golf swings when comparing the golf swings corresponds to the claimed “extracting at least one frame corresponding to a predetermined stage among the plurality of stages of the user's golf swing from photographed images of the user's golf swing”. 
Furthermore, the Examiner asserts that the key frame corresponding to the beginning key gesture determined by Zhang et al. and/or the start point of the video of the user’s pose corresponds to a predetermined stage among the plurality of stages of the user's golf swing. Additionally, the Examiner asserts that Zhang et al. disclose “determining a comparison point for the extracted at least one frame corresponding to the predetermined stage of the user's golf swing” at least because Zhang et al. disclose that pose consistency detection is performed on each video frame of the user’s golf swing, that the start point of the video of the user’s pose is synchronized to the start point of the video of the standard pose during comparison for pose consistency detection and that pose consistency detection compares bone node vectors, skeleton information, of the user’s golf swing, which requires detecting bone nodes, i.e., comparison points, in the image frames of the user’s golf swing, to bone node vectors of the standard golf swing. Therefore, the Examiner asserts that Zhang et al. disclose, at least, “extracting at least one frame corresponding to a predetermined stage among the plurality of stages of the user's golf swing from photographed images of the user's golf swing” and “determining a comparison point for the extracted at least one frame corresponding to the predetermined stage of the user's golf swing”. The Examiner notes that Zhang et al. fail to disclose explicitly “wherein the comparison point is varied depending on the stage to which the extracted at least one frame corresponds”. 
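The scoring pipeline the Examiner reads out of the cited portions of Zhang et al. (per-frame similarity between skeletons, extra weight assigned to key frames, weighted averaging into a final evaluation score) can be sketched as follows. This is an illustrative reconstruction, not code from the reference: the cosine-similarity measure over bone node vectors and the key-frame weight value are assumptions made for the sketch.

```python
import math

def cosine_similarity(u, v):
    """Similarity between two bone-node vectors (illustrative measure)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def frame_score(user_skeleton, standard_skeleton):
    """Per-frame pose consistency: average similarity over bone-node vectors."""
    sims = [cosine_similarity(u, s)
            for u, s in zip(user_skeleton, standard_skeleton)]
    return sum(sims) / len(sims)

def swing_score(user_frames, standard_frames, key_frames, key_weight=3.0):
    """Weighted average of per-frame scores; key frames (e.g. frames matched
    to beginning/ending key gestures) receive a larger, assumed weight."""
    total = weight_sum = 0.0
    for i, (uf, sf) in enumerate(zip(user_frames, standard_frames)):
        w = key_weight if i in key_frames else 1.0
        total += w * frame_score(uf, sf)
        weight_sum += w
    return total / weight_sum
```

Under this sketch, two identical, already-synchronized skeleton sequences score 1.0 regardless of the weights, while a mismatch at a key frame drags the final score down more than the same mismatch at an ordinary frame, which is the behavior the Examiner points to in Zhang's paragraph [0286].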
However, the Examiner asserts that analogous prior art Marks discloses the newly amended and disputed claim limitation, i.e., wherein the comparison point is varied depending on the stage to which the extracted at least one frame corresponds, see at least figures 2A - 6 and 28, page 1 paragraphs 0006 - 0007, page 4 paragraphs 0075 - 0080, page 6 paragraphs 0096 - 0098 and 0104 - 0105, page 10 paragraph 0126, page 13 paragraph 0154 and page 17 paragraph 0200 of Marks, wherein Marks discloses comparing components, i.e., comparison points, from a plurality of positions, i.e., stages, of a user’s golf swing to corresponding components of an ideal golf swing, that the components to be compared vary depending on the position of the golf swings and that the components are derived from images of the user’s and ideal golf swings. Thus, the Examiner asserts that Marks discloses, at least, “wherein the comparison point is varied depending on the stage to which the extracted at least one frame corresponds”. Therefore, the Examiner asserts that Zhang et al. in view of Chen et al. in view of Marks disclose the aforementioned disputed claim limitations.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1, 6, 7, 9 - 12 and 15 - 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. U.S. Publication No. 2019/0347826 A1 in view of Chen et al. U.S. Publication No. 2021/0200993 A1 in view of Marks U.S. Publication No. 2013/0316840 A1.
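The role the Examiner assigns to Marks, that the set of components compared varies with the position (stage) of the swing, can be illustrated with a small stage-to-comparison-point mapping. The stage names, point names, and degree tolerance below are hypothetical examples chosen for illustration; they are not taken from Marks or from the claims.

```python
# Hypothetical mapping from swing stage to the comparison points checked at
# that stage; names are illustrative, not from the cited references.
COMPARISON_POINTS = {
    "address": ["spine_angle", "knee_flex"],
    "top":     ["shoulder_turn", "lead_arm_angle"],
    "impact":  ["hip_rotation", "head_position"],
    "finish":  ["balance_line"],
}

def points_for_stage(stage):
    """The comparison point set varies depending on the swing stage."""
    return COMPARISON_POINTS[stage]

def compare_stage(stage, user_metrics, target_metrics, tolerance=5.0):
    """Compare only the stage-specific points; metrics are angle values in
    degrees keyed by comparison-point name, tolerance is an assumed threshold."""
    return {
        p: abs(user_metrics[p] - target_metrics[p]) <= tolerance
        for p in points_for_stage(stage)
    }
```

For example, at the "address" stage only the address-specific points are evaluated, so a user whose spine angle is within tolerance but whose knee flex is not would receive a per-point result of `{"spine_angle": True, "knee_flex": False}`, while the "finish" stage would check an entirely different set of points.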
- With regards to claim 1, Zhang et al. disclose a method performed in a device (Zhang et al., Abstract, Figs. 7, 22 & 23, Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0066, Pg. 7 ¶ 0127 - 0128, Pg. 18 ¶ 0309 - 0313 and 0318, Pg. 19 ¶ 0321 - 0326) for estimating information on a golf swing, (Zhang et al., Figs. 18A - 18B, Pg. 8 ¶ 0146 and 0150, Pg. 15 ¶ 0257 - 0261, Pg. 16 ¶ 0283 - Pg. 17 ¶ 0291 [“sports such as golf or tennis may require players to have accurate or standard poses. In the disclosure, players' poses may be evaluated and adjustment advice with respect to the poses may be provided to the user” and “FIG. 18B illustrates a flowchart of scoring based on comparing the standard pose with the user pose. In operation 1801, the user acquires a standard pose. The standard pose may be, for example, a golf swing pose shown in FIG. 18A”]) the device comprising one or more processors (Zhang et al., Figs. 7, 22 & 23, Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0066, Pg. 7 ¶ 0127 - 0128, Pg. 18 ¶ 0309 - 0313 and 0318, Pg. 19 ¶ 0321 - 0326) and the method comprising the steps of: by the one or more processors, (Zhang et al., Figs. 7, 22 & 23, Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0066, Pg. 7 ¶ 0127 - 0128, Pg. 18 ¶ 0309 - 0313 and 0318, Pg. 19 ¶ 0321 - 0326) light-weighting an artificial neural network model to obtain a light-weighted artificial neural network model; (Zhang et al., Fig. 6, Pg. 2 ¶ 0024 - 0025, Pg. 3 ¶ 0068 - 0070, Pg. 6 ¶ 0104 and 0110 - 0112, Pg. 7 ¶ 0123 - 0126, Pg. 8 ¶ 0140) by the one or more processors, (Zhang et al., Figs. 7, 22 & 23, Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0066, Pg. 7 ¶ 0127 - 0128, Pg. 18 ¶ 0309 - 0313 and 0318, Pg. 19 ¶ 0321 - 0326) when a photographed image of a user's golf swing is acquired, (Zhang et al., Figs. 18A - 18B, Pg. 8 ¶ 0150, Pg. 16 ¶ 0283 - Pg. 17 ¶ 0290) detecting at least one joint of the user from the photographed image using the light-weighted artificial neural network model; (Zhang et al., Figs. 1 - 6, 8 - 9 & 18A - 18B, Pg. 1 ¶ 0020 - Pg. 2 ¶ 0022, Pg. 
3 ¶ 0068 - 0070, Pg. 3 ¶ 0075 - 0077, Pg. 6 ¶ 0110 - 0112, Pg. 9 ¶ 0161 - 0162, Pg. 13 ¶ 0221 - 0223, Pg. 15 ¶ 0257 - 0261, Pg. 16 ¶ 0284 - Pg. 17 ¶ 0291) by the one or more processors, (Zhang et al., Figs. 7, 22 & 23, Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0066, Pg. 7 ¶ 0127 - 0128, Pg. 18 ¶ 0309 - 0313 and 0318, Pg. 19 ¶ 0321 - 0326) estimating a posture of the user with reference to at least one of a type of the at least one joint of the user, a position of the at least one joint of the user, a distance between the at least one joint of the user and at least one other joint of the user, and an angle formed between the at least one joint of the user and at least one other joint of the user; (Zhang et al., Abstract, Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0005 - 0007 and 0014 - 0017, Pg. 8 ¶ 0146 - 0151, Pg. 9 ¶ 0159 - 0166, Pg. 13 ¶ 0221 - 0224 and 0229 - 0230, Pg. 15 ¶ 0256 - 0258 and 0256 - 0271, Pg. 16 ¶ 0284, Pg. 17 ¶ 0297, Pg. 18 ¶ 0316) by the one or more processors, (Zhang et al., Figs. 7, 22 & 23, Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0066, Pg. 7 ¶ 0127 - 0128, Pg. 18 ¶ 0309 - 0313 and 0318, Pg. 19 ¶ 0321 - 0326) comparing the user's golf swing and a golf swing of a comparison target with reference to the posture of the user and at least one comparison point; (Zhang et al., Figs. 9, 10, 16A - 16B & 18A - 18B, Pg. 1 ¶ 0014 - 0017, Pg. 9 ¶ 0159 - 0166, Pg. 13 ¶ 0221 - 0225 and 0230 - 0234, Pg. 15 ¶ 0256 - 0261, Pg. 15 ¶ 0265 - Pg. 16 ¶ 0272, Pg. 16 ¶ 0283 - Pg. 17 ¶ 0291, Pg. 18 ¶ 0316 - Pg. 19 ¶ 0319) and by the one or more processors, (Zhang et al., Figs. 7, 22 & 23, Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0066, Pg. 7 ¶ 0127 - 0128, Pg. 18 ¶ 0309 - 0313 and 0318, Pg. 19 ¶ 0321 - 0326) estimating information on the user's golf swing on the basis of a result of the comparison, (Zhang et al., Figs. 18A - 18B, Pg. 16 ¶ 0283 - Pg. 
17 ¶ 0291) wherein the at least one comparison point includes at least one of the position of the at least one joint, a position of a specific body part of the user estimated from the at least one joint, a reference line formed from the position of the at least one joint, and an angle formed from two or more reference lines, (Zhang et al., Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0005 - 0007 and 0014 - 0017, Pg. 6 ¶ 0111, Pg. 9 ¶ 0159 - 0162, Pg. 13 ¶ 0222 - 0226 and 0230 - 0235, Pg. 15 ¶ 0265 - 0271, Pg. 17 ¶ 0288 - 0291 and 0297, Pg. 18 ¶ 0314 - 0318) wherein the at least one comparison point is established separately for each partial motion constituting each of the user's golf swing and the comparison target's golf swing, (Zhang et al., Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0014 - 0017, Pg. 15 ¶ 0256 - 0261 and 0265 - 0271, Pg. 16 ¶ 0282 - Pg. 17 ¶ 0291 [“The video may be considered as a sequence of images. Therefore, the standard degree of the pose of the player may be scored by identifying the similarity between the pose of the player in each image frame and the standard pose” and “if the number of image frames in which the player is playing golf is n, the system may perform pose estimation with respect to each frame and may respectively perform pose consistency evaluation between the pose of the player and the standard pose to obtain a sequence of scores”]) and established separately for each point of view with respect to the same partial motion, (Zhang et al., Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0014 - 0017, Pg. 15 ¶ 0256 - 0261 and 0265 - 0271, Pg. 16 ¶ 0282 - Pg. 17 ¶ 0291 [“The video may be considered as a sequence of images. 
Therefore, the standard degree of the pose of the player may be scored by identifying the similarity between the pose of the player in each image frame and the standard pose” and “if the number of image frames in which the player is playing golf is n, the system may perform pose estimation with respect to each frame and may respectively perform pose consistency evaluation between the pose of the player and the standard pose to obtain a sequence of scores”]) wherein the user’s golf swing and the comparison target’s golf swing are constituted by a plurality of stages of partial motions, respectively, (Zhang et al., Fig. 18A, Pg. 15 ¶ 0259 - 0261, Pg. 16 ¶ 0283 - Pg. 17 ¶ 0291 [“the standard degree of the pose of the player may be scored by identifying the similarity between the pose of the player in each image frame and the standard pose. In addition, the system may extract key frames in the video according to algorithms related to extraction of key frames of the video. The system may assign great weights to these key frames and perform weighted averaging on the consistency scores of all frames to obtain the final evaluation score of the pose of the player”, “if the number of image frames in which the player is playing golf is n, the system may perform pose estimation with respect to each frame and may respectively perform pose consistency evaluation between the pose of the player and the standard pose to obtain a sequence of scores”, “when the user captures a video, the beginning of video may include redundant frames. To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . 
, Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons” and “the user obtains images of a user's pose that may be similar to the obtained standard pose. In operation 1805, the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose”]) and similarity between the user’s golf swing and the comparison target’s golf swing is identified in consideration of the plurality of stages of partial motions, (Zhang et al., Fig. 18A, Pg. 15 ¶ 0257 - 0261 and 0265 - 0271, Pg. 16 ¶ 0280 - Pg. 17 ¶ 0291, Pg. 18 ¶ 0316 - Pg. 19 ¶ 0319 [“when the similarity between bone node vectors corresponding to the objects is determined, the similarity may be determined for all bone node vectors, or it may be determined whether the poses of the objects are consistent only based on the similarity between key bone node vectors”, “If a player wants to evaluate his/her sport pose 1810 or acquire adjustment advice, the system may perform pose estimation with respect to the player based on the image to obtain skeleton information. Next, the system may perform pose consistency detection with respect to the player and the object in a standard pose based on the skeleton information of the player and the object in a standard pose”, “the standard degree of the pose of the player may be scored by identifying the similarity between the pose of the player in each image frame and the standard pose. In addition, the system may extract key frames in the video according to algorithms related to extraction of key frames of the video. 
The system may assign great weights to these key frames and perform weighted averaging on the consistency scores of all frames to obtain the final evaluation score of the pose of the player”, “when the user captures a video, the beginning of video may include redundant frames. To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . , Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons… Thus, image frames in the standard video, which correspond to all image frames in the video taken by the user, are determined. The similarity between skeletons is calculated frame by frame, and then the pose of the user is scored”, “the user obtains images of a user's pose that may be similar to the obtained standard pose. In operation 1805, the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose” and “based on the comparison between the skeleton information of the standard pose and the skeleton information of the user pose, the degree of user pose accuracy may be evaluated and scored. In an embodiment, as described referring to FIG. 18A, the length and/or the angle made using nodes are detected and compared with a predetermined threshold value. 
The comparison result may represent whether the user pose is close to the standard pose and how much close the user pose is”]) and wherein the method further comprises: extracting at least one frame corresponding to a predetermined stage among the plurality of stages of the user’s golf swing from photographed images of the user's golf swing; (Zhang et al., Fig. 18A, Pg. 15 ¶ 0259 - 0261, Pg. 16 ¶ 0284 - Pg. 17 ¶ 0291 [“when the user captures a video, the beginning of video may include redundant frames. To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . , Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons” and “In operation 1805, the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose”]) determining a comparison point for the extracted at least one frame corresponding to the predetermined stage of the user’s golf swing; (Zhang et al., Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0014 - 0017, Pg. 15 ¶ 0256 - 0261 and 0265 - 0271, Pg. 16 ¶ 0280 - Pg. 
17 ¶ 0291 [“when the similarity between bone node vectors corresponding to the objects is determined, the similarity may be determined for all bone node vectors, or it may be determined whether the poses of the objects are consistent only based on the similarity between key bone node vectors”, “If a player wants to evaluate his/her sport pose 1810 or acquire adjustment advice, the system may perform pose estimation with respect to the player based on the image to obtain skeleton information. Next, the system may perform pose consistency detection with respect to the player and the object in a standard pose based on the skeleton information of the player and the object in a standard pose”, “the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose” and “based on the comparison between the skeleton information of the standard pose and the skeleton information of the user pose, the degree of user pose accuracy may be evaluated and scored. In an embodiment, as described referring to FIG. 18A, the length and/or the angle made using nodes are detected and compared with a predetermined threshold value. The comparison result may represent whether the user pose is close to the standard pose and how much close the user pose is”]) and comparing the comparison point for the extracted at least one frame to a comparison point for a frame corresponding to a stage corresponding to the predetermined stage of the user’s golf swing among the plurality of stages of the golf swing of the comparison target. (Zhang et al., Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0014 - 0017, Pg. 15 ¶ 0256 - 0261 and 0265 - 0271, Pg. 16 ¶ 0280 - Pg. 
17 ¶ 0291 [“when the similarity between bone node vectors corresponding to the objects is determined, the similarity may be determined for all bone node vectors, or it may be determined whether the poses of the objects are consistent only based on the similarity between key bone node vectors”, “If a player wants to evaluate his/her sport pose 1810 or acquire adjustment advice, the system may perform pose estimation with respect to the player based on the image to obtain skeleton information. Next, the system may perform pose consistency detection with respect to the player and the object in a standard pose based on the skeleton information of the player and the object in a standard pose”, “image frames in the standard video, which correspond to all image frames in the video taken by the user, are determined. The similarity between skeletons is calculated frame by frame, and then the pose of the user is scored”, “the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose” and “based on the comparison between the skeleton information of the standard pose and the skeleton information of the user pose, the degree of user pose accuracy may be evaluated and scored. In an embodiment, as described referring to FIG. 18A, the length and/or the angle made using nodes are detected and compared with a predetermined threshold value. The comparison result may represent whether the user pose is close to the standard pose and how much close the user pose is.”]) Zhang et al. 
fail to disclose explicitly light-weighting an artificial neural network model using depthwise convolution and pointwise convolution; and wherein the comparison point is varied depending on the stage to which the extracted at least one frame corresponds. Pertaining to analogous art, Chen et al. disclose light-weighting an artificial neural network model using depthwise convolution and pointwise convolution to obtain a light-weighted artificial neural network model. (Chen et al., Abstract, Figs. 4 - 8 & 10 - 11, Pg. 1 ¶ 0008 - 0009, Pg. 2 ¶ 0022 - 0024, Pg. 3 ¶ 0035 - Pg. 4 ¶ 0038, Pg. 4 ¶ 0042 - 0045, Pg. 5 ¶ 0048 - 0053, Pg. 5 ¶ 0055 - Pg. 6 ¶ 0059, Pg. 9 ¶ 0070 - 0071 and 0075, Pg. 10 ¶ 0079 - Pg. 11 ¶ 0084) Chen et al. fail to disclose explicitly wherein the comparison point is varied depending on the stage to which the extracted at least one frame corresponds. Pertaining to analogous art, Marks discloses when a photographed image of a user’s golf swing is acquired, detecting at least one joint of the user from the photographed image; (Marks, Figs. 2A - 7, 10A, 12, 16, 18, 22, 25 & 28, Pg. 1 ¶ 0007, Pg. 4 ¶ 0076 - 0080, Pg. 5 ¶ 0086 - 0090, Pg. 6 ¶ 0096 - 0099 and 0104 - 0107, Pg. 7 ¶ 0111 - 0113, Pg. 8 ¶ 0115 - 0118, Pg. 10 ¶ 0126, Pg. 11 ¶ 0136, Pg. 13 ¶ 0154 - Pg. 14 ¶ 0157, Pg. 16 ¶ 0180 - 0182, Pg. 17 ¶ 0197 - 0201) comparing the user’s golf swing and a golf swing of a comparison target with reference to the posture of the user and at least one comparison point; (Marks, Abstract, Figs. 6, 10A & 28, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0079, Pg. 5 ¶ 0084 and 0091 - 0093, Pg. 6 ¶ 0096 - 0100 and 0103 - 0107, Pg. 7 ¶ 0111 - 0113, Pg. 17 ¶ 0192 and 0197 - 0198) and estimating information on the user’s golf swing on the basis of a result of the comparison, (Marks, Abstract, Figs. 6 & 28, Pg. 1 ¶ 0006 - 0009, Pg. 4 ¶ 0079, Pg. 5 ¶ 0084, Pg. 6 ¶ 0096 - 0101, Pg.
17 ¶ 0191 - 0193, 0197 - 0198 and 0202 - 0203) wherein the at least one comparison point includes at least one of the position of the at least one joint, a position of a specific body part of the user estimated from the at least one joint, a reference line formed from the position of the at least one joint, and an angle formed from two or more reference lines, (Marks, Figs. 2A - 4B, 10A, 12, 16, 18, 22, 25 & 28, Pg. 1 ¶ 0021, Pg. 4 ¶ 0076 - 0079, Pg. 6 ¶ 0104 - 0107, Pg. 7 ¶ 0111 - 0113, Pg. 8 ¶ 0115 - 0118, Pg. 10 ¶ 0126, Pg. 11 ¶ 0136, Pg. 13 ¶ 0154 - Pg. 14 ¶ 0157, Pg. 17 ¶ 0197 - 0201) wherein the at least one comparison point is established separately for each partial motion constituting each of the user's golf swing and the comparison target's golf swing, (Marks, Figs. 6 & 28, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0075 - 0079, Pg. 5 ¶ 0084 - 0093, Pg. 6 ¶ 0096 - 0099 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 17 ¶ 0192 and 0197 - 0200) wherein the user’s golf swing and the comparison target’s golf swing are constituted by a plurality of stages of partial motions, respectively, (Marks, Figs. 6 & 28, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0075 - 0079, Pg. 5 ¶ 0084 - 0093, Pg. 6 ¶ 0096 - 0099 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 17 ¶ 0198 - 0200) and similarity between the user’s golf swing and the comparison target’s golf swing is identified in consideration of the plurality of stages of partial motions, (Marks, Abstract, Figs. 6 & 28, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0079, Pg. 5 ¶ 0084, Pg. 6 ¶ 0096 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 16 ¶ 0180 - 0181, Pg. 17 ¶ 0192 - 0193 and 0197 - 0203) and wherein the method further comprises: extracting at least one frame corresponding to a predetermined stage among the plurality of stages of the user’s golf swing from photographed images of the user's golf swing; (Marks, Figs. 5, 6 & 28, Pg. 1 ¶ 0007, Pg. 4 ¶ 0075 - 0080, Pg. 5 ¶ 0085 - 0093, Pg. 6 ¶ 0096 - 0099 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 
16 ¶ 0180 - 0182, Pg. 17 ¶ 0197 - 0200 and 0203) determining a comparison point for the extracted at least one frame corresponding to the predetermined stage of the user’s golf swing, (Marks, Figs. 6 & 28, Pg. 4 ¶ 0075 - 0079, Pg. 5 ¶ 0084 - 0090, Pg. 6 ¶ 0096 - 0098 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 17 ¶ 0198 - 0200) wherein the comparison point is varied depending on the stage to which the extracted at least one frame corresponds; (Marks, Figs. 6 & 28, Pg. 4 ¶ 0075 - 0079, Pg. 5 ¶ 0084 - 0090, Pg. 6 ¶ 0096 - 0098 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 17 ¶ 0198 - 0200) and comparing the comparison point for the extracted at least one frame to a comparison point for a frame corresponding to a stage corresponding to the predetermined stage of the user’s golf swing among the plurality of stages of the golf swing of the comparison target. (Marks, Abstract, Figs. 6 & 28, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0075 - 0080, Pg. 5 ¶ 0084 - 0093, Pg. 6 ¶ 0096 - 0099 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 16 ¶ 0180 - 0182, Pg. 17 ¶ 0192 - 0193 and 0197 - 0203) Zhang et al. and Chen et al. are combinable because they are both directed towards image processing systems that perform object recognition operations utilizing light-weighted artificial neural network models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhang et al. with the teachings of Chen et al. This modification would have been prompted in order to substitute the depthwise convolution and pointwise convolution neural network light-weighting technique of Chen et al. for the neural network model compression process of Zhang et al. The depthwise convolution and pointwise convolution neural network light-weighting technique of Chen et al. could be substituted in place of the neural network model compression process of Zhang et al.
utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination the depthwise convolution and pointwise convolution neural network light-weighting technique of Chen et al. would be utilized to realize the lightweight neural network of Zhang et al. that is utilized to detect at least one joint of the user. Furthermore, this modification would have been prompted by the teachings and suggestions of Zhang et al. that other ways of compressing or a combination of multiple compression ways may be used to compress their neural network to realize their lightweight neural network, see at least page 7 paragraphs 0124 - 0127 of Zhang et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the depthwise convolution and pointwise convolution neural network light-weighting technique of Chen et al. would be utilized to realize the lightweight neural network of Zhang et al. In addition, Zhang et al. in view of Chen et al. and Marks are combinable because they are all directed towards image processing systems and, similar to Zhang et al., Marks is also directed towards an image processing system that automatically evaluates a user’s golf swing by processing image data of the user’s golf swing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Zhang et al. in view of Chen et al. with the teachings of Marks. This modification would have been prompted in order to enhance the combined base device of Zhang et al. in view of Chen et al. with the well-known and applicable technique Marks applied to a comparable device. 
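The light-weighting technique attributed to Chen et al. — replacing a standard convolution with a depthwise convolution followed by a pointwise (1×1) convolution — can be illustrated by counting weights. A minimal sketch with hypothetical layer dimensions (the channel counts and kernel size below are illustrative only and are not taken from either reference):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution: c_in * c_out * k * k."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Weights after light-weighting: a depthwise k x k filter per input
    channel, plus pointwise 1 x 1 filters to mix channels."""
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 cross-channel mixing
    return depthwise + pointwise

# Hypothetical layer: 64 -> 128 channels, 3 x 3 kernel
standard = conv_params(64, 128, 3)    # 73,728 weights
light = separable_params(64, 128, 3)  # 576 + 8,192 = 8,768 weights
print(standard, light, round(standard / light, 1))
```

The roughly 8× reduction in weights is what makes such a factorization a plausible drop-in for the model-compression process the rejection attributes to Zhang et al.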
Varying the comparison point determined for the extracted at least one frame depending on the stage to which the extracted at least one frame corresponds, as taught by Marks, would enhance the combined base device by improving its ability to accurately, reliably and efficiently compare the user’s golf swing to the comparison target’s golf swing and thus estimate information on the user’s golf swing since instead of utilizing all possible comparison points in each image frame of the user’s golf swing to evaluate the user’s golf swing only the most important comparison point(s) that best represents the stage of the user’s golf swing to which an image frame belongs would be utilized to evaluate the user’s golf swing in order to ensure that the user’s golf swing is the main focus of the evaluation and reduce unnecessary and erroneous comparisons during the evaluation. Furthermore, this modification would have been prompted by the teachings and suggestions of Zhang et al. that it may be determined whether poses of objects are consistent only based on the similarity between key bone node vectors and that the key bone node vectors may be preset by the user, see at least page 16 paragraph 0280 of Zhang et al. 
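The rationale above — evaluating only the comparison point(s) most representative of the stage to which a frame belongs, rather than all possible points in every frame — amounts to a per-stage lookup. A minimal sketch (the stage names and comparison-point labels are hypothetical and are not drawn from the claims or the cited references):

```python
# Hypothetical mapping from swing stage to the comparison points evaluated
# for frames of that stage; points not listed are skipped for that stage.
STAGE_POINTS = {
    "address":   ["spine_angle", "knee_flex"],
    "backswing": ["left_arm_line", "hip_turn"],
    "impact":    ["hip_position", "wrist_angle"],
    "finish":    ["shoulder_line"],
}

def points_for_frame(stage):
    """Return the comparison points to evaluate for a frame of this stage."""
    return STAGE_POINTS.get(stage, [])

print(points_for_frame("impact"))
```

Restricting each frame's evaluation to its stage-specific points is what the rejection characterizes as reducing unnecessary and erroneous comparisons.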
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the comparison point determined for the extracted at least one frame would be varied depending on the stage to which the extracted at least one frame corresponds so as to improve the ability of the combined base device to accurately, reliably and efficiently compare the user’s golf swing to the comparison target’s golf swing and thus estimate information on the user’s golf swing since instead of utilizing all possible comparison points in each image frame of the user’s golf swing to evaluate the user’s golf swing only the most important comparison point(s) that best represents the stage of the user’s golf swing to which an image frame belongs would be utilized to evaluate the user’s golf swing. Therefore, it would have been obvious to combine Zhang et al. with Chen et al. and Marks to obtain the invention as specified in claim 1. - With regards to claim 6, Zhang et al. in view of Chen et al. in view of Marks disclose the method of Claim 1. Zhang et al. fail to disclose explicitly wherein in the step of comparing the user’s golf swing and the comparison target’s golf swing, information on a golf club is estimated with reference to at least one of the type of the at least one joint of the user, the position of the at least one joint of the user, the distance between the at least one joint of the user and at least one other joint of the user, and the angle formed between the at least one joint of the user and at least one other joint of the user, and the user's golf swing and the comparison target's golf swing are compared with further reference to the estimated information on the golf club. 
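The frame-by-frame consistency scoring with key-frame weighting that Zhang et al. are quoted for throughout the claim 1 analysis reduces to a weighted average of per-frame similarity scores. A minimal sketch (the scores, key-frame indices, and weight value are made up for illustration):

```python
def weighted_pose_score(frame_scores, key_frames, key_weight=3.0):
    """Weighted average of per-frame pose-consistency scores, with key
    frames (e.g. beginning, middle, and ending gestures) weighted more
    heavily than ordinary frames."""
    weights = [key_weight if i in key_frames else 1.0
               for i in range(len(frame_scores))]
    total = sum(w * s for w, s in zip(weights, frame_scores))
    return total / sum(weights)

# Hypothetical per-frame scores, with frames 0 and 3 treated as key frames
scores = [0.9, 0.6, 0.7, 1.0]
print(weighted_pose_score(scores, key_frames={0, 3}))
```

Under this weighting, a poor match at an ordinary frame drags the final evaluation score down less than a poor match at a key gesture.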
Pertaining to analogous art, Marks discloses wherein in the step of comparing the user’s golf swing and the comparison target’s golf swing, information on a golf club is estimated with reference to at least one of the type of the at least one joint of the user, the position of the at least one joint of the user, the distance between the at least one joint of the user and at least one other joint of the user, and the angle formed between the at least one joint of the user and at least one other joint of the user, (Marks, Figs. 3A - 4A & 22, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0077 - 0079, Pg. 6 ¶ 0096 - 0098, Pg. 10 ¶ 0126, Pg. 11 ¶ 0136 - Pg. 12 ¶ 0140, Pg. 13 ¶ 0154, Pg. 14 ¶ 0163, Pg. 16 ¶ 0181 [“the measurement is of the angle between the left forearm and the club shaft” and “wrist component 230-5 is determined by the measurement of the angle between the club shaft and the right forearm at finish of swing”]) and the user's golf swing and the comparison target's golf swing are compared with further reference to the estimated information on the golf club. (Marks, Figs. 3A - 4A & 22, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0077 - 0079, Pg. 6 ¶ 0096 - 0098, Pg. 10 ¶ 0126, Pg. 11 ¶ 0136 - Pg. 12 ¶ 0140, Pg. 13 ¶ 0154, Pg. 14 ¶ 0163, Pg. 16 ¶ 0181) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Zhang et al. in view of Chen et al. in view of Marks with additional teachings of Marks. This modification would have been prompted in order to enhance the combined base device of Zhang et al. in view of Chen et al. in view of Marks with the well-known and applicable technique Marks applied to a comparable device. 
Estimating information on a golf club with reference to at least one of the type of the at least one joint of the user, the position of the at least one joint of the user, the distance between the at least one joint of the user and at least one other joint of the user, and the angle formed between the at least one joint of the user and at least one other joint of the user, and comparing the user's golf swing and the comparison target's golf swing with further reference to the estimated information on the golf club, as taught by Marks, would enhance the combined base device by improving its ability to accurately and reliably estimate information on the user’s golf swing since, in addition to the position of the at least one detected joint, information on the user’s golf club would additionally be estimated and taken into account when estimating information on the user’s golf swing thereby allowing for a more complete and thorough evaluation of the user’s golf swing to be conducted since the user’s body positioning as well as positioning of their golf club would be evaluated when estimating information on the user’s golf swing. 
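The club measurement Marks is cited for — e.g. the angle between the left forearm and the club shaft — is a straightforward vector-angle computation once joint and club-head positions are estimated. A sketch with made-up 2-D coordinates (the joint names and values are hypothetical, not taken from Marks):

```python
import math

def angle_between(p, q, r):
    """Angle at q, in degrees, formed by segments q->p and q->r,
    e.g. forearm (wrist->elbow) versus club shaft (wrist->club head)."""
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical estimated positions: elbow, wrist, club head
elbow, wrist, club_head = (0.0, 2.0), (0.0, 1.0), (1.0, 0.0)
print(round(angle_between(elbow, wrist, club_head), 1))  # → 135.0
```

Comparing such an angle against the corresponding angle in the comparison target's swing is one way the estimated club information could feed back into the swing comparison.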
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that information on a golf club would be estimated with reference to at least one of the type of the at least one joint of the user, the position of the at least one joint of the user, the distance between the at least one joint of the user and at least one other joint of the user, and the angle formed between the at least one joint of the user and at least one other joint of the user, and the user's golf swing and the comparison target's golf swing would be compared with further reference to the estimated information on the golf club so as to enable a more complete and thorough evaluation of the user’s golf swing to be conducted and improve the accuracy and reliability of the information on the user’s golf swing estimated since the user’s body positioning as well as positioning of their golf club would be evaluated when estimating information on the user’s golf swing. Therefore, it would have been obvious to combine Zhang et al. in view of Chen et al. in view of Marks with additional teachings of Marks to obtain the invention as specified in claim 6. - With regards to claim 7, Zhang et al. in view of Chen et al. in view of Marks disclose the method of Claim 1, wherein storage of the acquired photographed image is started when the posture of the user satisfies a predetermined storage start condition, (Zhang et al., Figs. 9 & 18A - 18B, Pg. 3 ¶ 0066, Pg. 8 ¶ 0141 - 0143, Pg. 9 ¶ 0159 - 0166, Pg. 16 ¶ 0284 - Pg. 17 ¶ 0291 [“when the user captures a video, the beginning of video may include redundant frames. To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . 
, Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons. Image frames in the standard video, corresponding to image frames between every two key frames, are determined by a linear difference. Thus, image frames in the standard video, which correspond to all image frames in the video taken by the user, are determined. The similarity between skeletons is calculated frame by frame, and then the pose of the user is scored”]) and the storage of the acquired photographed image is ended when the posture of the user satisfies a predetermined storage end condition. (Zhang et al., Figs. 9 & 18A - 18B, Pg. 3 ¶ 0066, Pg. 8 ¶ 0141 - 0143, Pg. 9 ¶ 0159 - 0166, Pg. 16 ¶ 0284 - Pg. 17 ¶ 0291 [“when the user captures a video, the beginning of video may include redundant frames. To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . , Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons. Image frames in the standard video, corresponding to image frames between every two key frames, are determined by a linear difference. Thus, image frames in the standard video, which correspond to all image frames in the video taken by the user, are determined. 
The similarity between skeletons is calculated frame by frame, and then the pose of the user is scored”]) - The Examiner notes, with regards to claim 9, that claim 9 does not positively recite a functional interrelationship between the computer program and an intended computer system for executing the computer program, and, absent such a positively recited interrelationship, the broadest reasonable interpretation of the limitations that the computer program is intended to perform encompasses interpretations wherein those limitations are non-functional, because the claim does not limit the computer program to an embodiment wherein the computer program is executed by an intended computer system in order to perform its recited limitations. The Examiner asserts that non-functional limitations are not given patentable weight, see at least MPEP § 2111.05. Therefore, the Examiner suggests amending the claim to positively recite a functional relationship between the computer program and an intended computer system for executing the computer program in order to give patentable weight to the limitations that the computer program is configured to perform. However, in order to expedite prosecution, the Examiner will examine the claim as if each and every limitation has patentable weight. Appropriate correction is required. - With regards to claim 9, Zhang et al. in view of Chen et al. in view of Marks disclose the method of Claim 1. ([See analysis of Claim 1 provided herein above.]) Zhang et al. disclose a non-transitory computer-readable recording medium having stored thereon a computer program for executing (Zhang et al., Pg. 3 ¶ 0066, Pg. 19 ¶ 0321 - 0329) the method of Claim 1. ([Zhang et al. in view of Chen et al. in view of Marks disclose the method of claim 1, see analysis of claim 1 provided herein above.]) - With regards to claim 10, Zhang et al. disclose a device for estimating information on a golf swing, (Zhang et al., Figs. 7, 18A - 18B & 22 - 23, Pg.
1 ¶ 0005 - 0007, Pg. 2 ¶ 0026, Pg. 3 ¶ 0066, Pg. 4 ¶ 0077, Pg. 7 ¶ 0127 - 0129, Pg. 8 ¶ 0146 and 0150, Pg. 15 ¶ 0257 - 0261, Pg. 16 ¶ 0283 - Pg. 17 ¶ 0291, Pg. 18 ¶ 0309 - Pg. 19 ¶ 0326 [“sports such as golf or tennis may require players to have accurate or standard poses. In the disclosure, players' poses may be evaluated and adjustment advice with respect to the poses may be provided to the user” and “FIG. 18B illustrates a flowchart of scoring based on comparing the standard pose with the user pose. In operation 1801, the user acquires a standard pose. The standard pose may be, for example, a golf swing pose shown in FIG. 18A”]) the device comprising one or more processors configured (Zhang et al., Figs. 7, 22 & 23, Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0066, Pg. 7 ¶ 0127 - 0128, Pg. 18 ¶ 0309 - 0313 and 0318, Pg. 19 ¶ 0321 - 0326) to: light-weight an artificial neural network model to obtain a light-weighted artificial neural network model; (Zhang et al., Fig. 6, Pg. 2 ¶ 0024 - 0025, Pg. 3 ¶ 0068 - 0070, Pg. 6 ¶ 0104 and 0110 - 0112, Pg. 7 ¶ 0123 - 0126, Pg. 8 ¶ 0140) when a photographed image of a user's golf swing is acquired, (Zhang et al., Figs. 18A - 18B, Pg. 8 ¶ 0150, Pg. 16 ¶ 0283 - Pg. 17 ¶ 0290) detect at least one joint of the user from the photographed image using the light-weighted artificial neural network model; (Zhang et al., Figs. 1 - 6, 8 - 9 & 18A - 18B, Pg. 1 ¶ 0020 - Pg. 2 ¶ 0022, Pg. 3 ¶ 0068 - 0070, Pg. 3 ¶ 0075 - 0077, Pg. 6 ¶ 0110 - 0112, Pg. 9 ¶ 0161 - 0162, Pg. 13 ¶ 0221 - 0223, Pg. 15 ¶ 0257 - 0261, Pg. 16 ¶ 0284 - Pg. 17 ¶ 0291) estimate a posture of the user with reference to at least one of a type of the at least one joint of the user, a position of the at least one joint of the user, a distance between the at least one joint of the user and at least one other joint of the user, and an angle formed between the at least one joint of the user and at least one other joint of the user; (Zhang et al., Abstract, Figs. 
9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0005 - 0007 and 0014 - 0017, Pg. 8 ¶ 0146 - 0151, Pg. 9 ¶ 0159 - 0166, Pg. 13 ¶ 0221 - 0224 and 0229 - 0230, Pg. 15 ¶ 0256 - 0258 and 0256 - 0271, Pg. 16 ¶ 0284, Pg. 17 ¶ 0297, Pg. 18 ¶ 0316) compare the user's golf swing and a golf swing of a comparison target with reference to the posture of the user and at least one comparison point; (Zhang et al., Figs. 9, 10, 16A - 16B & 18A - 18B, Pg. 1 ¶ 0014 - 0017, Pg. 9 ¶ 0159 - 0166, Pg. 13 ¶ 0221 - 0225 and 0230 - 0234, Pg. 15 ¶ 0256 - 0261, Pg. 15 ¶ 0265 - Pg. 16 ¶ 0272, Pg. 16 ¶ 0283 - Pg. 17 ¶ 0291, Pg. 18 ¶ 0316 - Pg. 19 ¶ 0319) estimate information on the user's golf swing on the basis of a result of the comparison, (Zhang et al., Figs. 18A - 18B, Pg. 16 ¶ 0283 - Pg. 17 ¶ 0291) wherein the at least one comparison point includes at least one of the position of the at least one joint, a position of a specific body part of the user estimated from the at least one joint, a reference line formed from the position of the at least one joint, and an angle formed from two or more reference lines, (Zhang et al., Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0005 - 0007 and 0014 - 0017, Pg. 6 ¶ 0111, Pg. 9 ¶ 0159 - 0162, Pg. 13 ¶ 0222 - 0226 and 0230 - 0235, Pg. 15 ¶ 0265 - 0271, Pg. 17 ¶ 0288 - 0291 and 0297, Pg. 18 ¶ 0314 - 0318) wherein the at least one comparison point is established separately for each partial motion constituting each of the user's golf swing and the comparison target's golf swing, (Zhang et al., Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0014 - 0017, Pg. 15 ¶ 0256 - 0261 and 0265 - 0271, Pg. 16 ¶ 0282 - Pg. 17 ¶ 0291 [“The video may be considered as a sequence of images. 
Therefore, the standard degree of the pose of the player may be scored by identifying the similarity between the pose of the player in each image frame and the standard pose” and “if the number of image frames in which the player is playing golf is n, the system may perform pose estimation with respect to each frame and may respectively perform pose consistency evaluation between the pose of the player and the standard pose to obtain a sequence of scores”]) and established separately for each point of view with respect to the same partial motion, (Zhang et al., Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0014 - 0017, Pg. 15 ¶ 0256 - 0261 and 0265 - 0271, Pg. 16 ¶ 0282 - Pg. 17 ¶ 0291 [“The video may be considered as a sequence of images. Therefore, the standard degree of the pose of the player may be scored by identifying the similarity between the pose of the player in each image frame and the standard pose” and “if the number of image frames in which the player is playing golf is n, the system may perform pose estimation with respect to each frame and may respectively perform pose consistency evaluation between the pose of the player and the standard pose to obtain a sequence of scores”]) wherein the user’s golf swing and the comparison target’s golf swing are constituted by a plurality of stages of partial motions, respectively, (Zhang et al., Fig. 18A, Pg. 15 ¶ 0259 - 0261, Pg. 16 ¶ 0283 - Pg. 17 ¶ 0291 [“the standard degree of the pose of the player may be scored by identifying the similarity between the pose of the player in each image frame and the standard pose. In addition, the system may extract key frames in the video according to algorithms related to extraction of key frames of the video. 
The system may assign great weights to these key frames and perform weighted averaging on the consistency scores of all frames to obtain the final evaluation score of the pose of the player”, “if the number of image frames in which the player is playing golf is n, the system may perform pose estimation with respect to each frame and may respectively perform pose consistency evaluation between the pose of the player and the standard pose to obtain a sequence of scores”, “when the user captures a video, the beginning of video may include redundant frames. To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . , Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons” and “the user obtains images of a user's pose that may be similar to the obtained standard pose. In operation 1805, the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose”]) and similarity between the user’s golf swing and the comparison target’s golf swing is identified in consideration of the plurality of stages of partial motions, (Zhang et al., Fig. 18A, Pg. 15 ¶ 0257 - 0261 and 0265 - 0271, Pg. 16 ¶ 0280 - Pg. 17 ¶ 0291, Pg. 18 ¶ 0316 - Pg. 
19 ¶ 0319 [“when the similarity between bone node vectors corresponding to the objects is determined, the similarity may be determined for all bone node vectors, or it may be determined whether the poses of the objects are consistent only based on the similarity between key bone node vectors”, “If a player wants to evaluate his/her sport pose 1810 or acquire adjustment advice, the system may perform pose estimation with respect to the player based on the image to obtain skeleton information. Next, the system may perform pose consistency detection with respect to the player and the object in a standard pose based on the skeleton information of the player and the object in a standard pose”, “the standard degree of the pose of the player may be scored by identifying the similarity between the pose of the player in each image frame and the standard pose. In addition, the system may extract key frames in the video according to algorithms related to extraction of key frames of the video. The system may assign great weights to these key frames and perform weighted averaging on the consistency scores of all frames to obtain the final evaluation score of the pose of the player”, “when the user captures a video, the beginning of video may include redundant frames. To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . , Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons… Thus, image frames in the standard video, which correspond to all image frames in the video taken by the user, are determined. The similarity between skeletons is calculated frame by frame, and then the pose of the user is scored”, “the user obtains images of a user's pose that may be similar to the obtained standard pose. 
In operation 1805, the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose” and “based on the comparison between the skeleton information of the standard pose and the skeleton information of the user pose, the degree of user pose accuracy may be evaluated and scored. In an embodiment, as described referring to FIG. 18A, the length and/or the angle made using nodes are detected and compared with a predetermined threshold value. The comparison result may represent whether the user pose is close to the standard pose and how much close the user pose is”]) and wherein the one or more processors (Zhang et al., Figs. 7, 22 & 23, Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0066, Pg. 7 ¶ 0127 - 0128, Pg. 18 ¶ 0309 - 0313 and 0318, Pg. 19 ¶ 0321 - 0326) are further configured to: extract at least one frame corresponding to a predetermined stage among the plurality of stages of the user’s golf swing from photographed images of the user's golf swing; (Zhang et al., Fig. 18A, Pg. 15 ¶ 0259 - 0261, Pg. 16 ¶ 0284 - Pg. 17 ¶ 0291 [“when the user captures a video, the beginning of video may include redundant frames. To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . , Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons” and “In operation 1805, the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. 
In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose”]) determine a comparison point for the extracted at least one frame corresponding to the predetermined stage of the user’s golf swing; (Zhang et al., Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0014 - 0017, Pg. 15 ¶ 0256 - 0261 and 0265 - 0271, Pg. 16 ¶ 0280 - Pg. 17 ¶ 0291 [“when the similarity between bone node vectors corresponding to the objects is determined, the similarity may be determined for all bone node vectors, or it may be determined whether the poses of the objects are consistent only based on the similarity between key bone node vectors”, “If a player wants to evaluate his/her sport pose 1810 or acquire adjustment advice, the system may perform pose estimation with respect to the player based on the image to obtain skeleton information. Next, the system may perform pose consistency detection with respect to the player and the object in a standard pose based on the skeleton information of the player and the object in a standard pose”, “the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose” and “based on the comparison between the skeleton information of the standard pose and the skeleton information of the user pose, the degree of user pose accuracy may be evaluated and scored. In an embodiment, as described referring to FIG. 18A, the length and/or the angle made using nodes are detected and compared with a predetermined threshold value. 
The comparison result may represent whether the user pose is close to the standard pose and how much close the user pose is”]) and compare the comparison point for the extracted at least one frame to a comparison point for a frame corresponding to a stage corresponding to the predetermined stage of the user’s golf swing among the plurality of stages of the golf swing of the comparison target. (Zhang et al., Figs. 9, 10, 16A, 16B, 18A & 18B, Pg. 1 ¶ 0014 - 0017, Pg. 15 ¶ 0256 - 0261 and 0265 - 0271, Pg. 16 ¶ 0280 - Pg. 17 ¶ 0291 [“when the similarity between bone node vectors corresponding to the objects is determined, the similarity may be determined for all bone node vectors, or it may be determined whether the poses of the objects are consistent only based on the similarity between key bone node vectors”, “If a player wants to evaluate his/her sport pose 1810 or acquire adjustment advice, the system may perform pose estimation with respect to the player based on the image to obtain skeleton information. Next, the system may perform pose consistency detection with respect to the player and the object in a standard pose based on the skeleton information of the player and the object in a standard pose”, “image frames in the standard video, which correspond to all image frames in the video taken by the user, are determined. The similarity between skeletons is calculated frame by frame, and then the pose of the user is scored”, “the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose” and “based on the comparison between the skeleton information of the standard pose and the skeleton information of the user pose, the degree of user pose accuracy may be evaluated and scored. 
In an embodiment, as described referring to FIG. 18A, the length and/or the angle made using nodes are detected and compared with a predetermined threshold value. The comparison result may represent whether the user pose is close to the standard pose and how much close the user pose is.”]) Zhang et al. fail to disclose explicitly light-weighting an artificial neural network model using depthwise convolution and pointwise convolution; and wherein the comparison point is varied depending on the stage to which the extracted at least one frame corresponds. Pertaining to analogous art, Chen et al. disclose light-weighting an artificial neural network model using depthwise convolution and pointwise convolution to obtain a light-weighted artificial neural network model. (Chen et al., Abstract, Figs. 4 - 8 & 10 - 11, Pg. 1 ¶ 0008 - 0009, Pg. 2 ¶ 0022 - 0024, Pg. 3 ¶ 0035 - Pg. 4 ¶ 0038, Pg. 4 ¶ 0042 - 0045, Pg. 5 ¶ 0048 - 0053, Pg. 5 ¶ 0055 - Pg. 6 ¶ 0059, Pg. 9 ¶ 0070 - 0071 and 0075, Pg. 10 ¶ 0079 - Pg. 11 ¶ 0084) Chen et al. fail to disclose explicitly wherein the comparison point is varied depending on the stage to which the extracted at least one frame corresponds. Pertaining to analogous art, Marks discloses when a photographed image of a user’s golf swing is acquired, detecting at least one joint of the user from the photographed image; (Marks, Figs. 2A - 7, 10A, 12, 16, 18, 22, 25 & 28, Pg. 1 ¶ 0007, Pg. 4 ¶ 0076 - 0080, Pg. 5 ¶ 0086 - 0090, Pg. 6 ¶ 0096 - 0099 and 0104 - 0107, Pg. 7 ¶ 0111 - 0113, Pg. 8 ¶ 0115 - 0118, Pg. 10 ¶ 0126, Pg. 11 ¶ 0136, Pg. 13 ¶ 0154 - Pg. 14 ¶ 0157, Pg. 16 ¶ 0180 - 0182, Pg. 17 ¶ 0197 - 0201) comparing the user’s golf swing and a golf swing of a comparison target with reference to the posture of the user and at least one comparison point; (Marks, Abstract, Figs. 6, 10A & 28, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0079, Pg. 5 ¶ 0084 and 0091 - 0093, Pg. 6 ¶ 0096 - 0100 and 0103 - 0107, Pg. 7 ¶ 0111 - 0113, Pg.
17 ¶ 0192 and 0197 - 0198) and estimating information on the user’s golf swing on the basis of a result of the comparison, (Marks, Abstract, Figs. 6 & 28, Pg. 1 ¶ 0006 - 0009, Pg. 4 ¶ 0079, Pg. 5 ¶ 0084, Pg. 6 ¶ 0096 - 0101, Pg. 17 ¶ 0191 - 0193, 0197 - 0198 and 0202 - 0203) wherein the at least one comparison point includes at least one of the position of the at least one joint, a position of a specific body part of the user estimated from the at least one joint, a reference line formed from the position of the at least one joint, and an angle formed from two or more reference lines, (Marks, Figs. 2A - 4B, 10A, 12, 16, 18, 22, 25 & 28, Pg. 1 ¶ 0021, Pg. 4 ¶ 0076 - 0079, Pg. 6 ¶ 0104 - 0107, Pg. 7 ¶ 0111 - 0113, Pg. 8 ¶ 0115 - 0118, Pg. 10 ¶ 0126, Pg. 11 ¶ 0136, Pg. 13 ¶ 0154 - Pg. 14 ¶ 0157, Pg. 17 ¶ 0197 - 0201) wherein the at least one comparison point is established separately for each partial motion constituting each of the user's golf swing and the comparison target's golf swing, (Marks, Figs. 6 & 28, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0075 - 0079, Pg. 5 ¶ 0084 - 0093, Pg. 6 ¶ 0096 - 0099 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 17 ¶ 0192 and 0197 - 0200) wherein the user’s golf swing and the comparison target’s golf swing are constituted by a plurality of stages of partial motions, respectively, (Marks, Figs. 6 & 28, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0075 - 0079, Pg. 5 ¶ 0084 - 0093, Pg. 6 ¶ 0096 - 0099 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 17 ¶ 0198 - 0200) and similarity between the user’s golf swing and the comparison target’s golf swing is identified in consideration of the plurality of stages of partial motions, (Marks, Abstract, Figs. 6 & 28, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0079, Pg. 5 ¶ 0084, Pg. 6 ¶ 0096 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 16 ¶ 0180 - 0181, Pg. 17 ¶ 0192 - 0193 and 0197 - 0203) and wherein the one or more processors are further configured (Marks, Fig. 5, Pg. 1 ¶ 0007, Pg. 
4 ¶ 0079 - 0081) to: extract at least one frame corresponding to a predetermined stage among the plurality of stages of the user’s golf swing from photographed images of the user's golf swing; (Marks, Figs. 5, 6 & 28, Pg. 1 ¶ 0007, Pg. 4 ¶ 0075 - 0080, Pg. 5 ¶ 0085 - 0093, Pg. 6 ¶ 0096 - 0099 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 16 ¶ 0180 - 0182, Pg. 17 ¶ 0197 - 0200 and 0203) determine a comparison point for the extracted at least one frame corresponding to the predetermined stage of the user’s golf swing, (Marks, Figs. 6 & 28, Pg. 4 ¶ 0075 - 0079, Pg. 5 ¶ 0084 - 0090, Pg. 6 ¶ 0096 - 0098 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 17 ¶ 0198 - 0200) wherein the comparison point is varied depending on the stage to which the extracted at least one frame corresponds; (Marks, Figs. 6 & 28, Pg. 4 ¶ 0075 - 0079, Pg. 5 ¶ 0084 - 0090, Pg. 6 ¶ 0096 - 0098 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 17 ¶ 0198 - 0200) and compare the comparison point for the extracted at least one frame to a comparison point for a frame corresponding to a stage corresponding to the predetermined stage of the user’s golf swing among the plurality of stages of the golf swing of the comparison target. (Marks, Abstract, Figs. 6 & 28, Pg. 1 ¶ 0006 - 0007, Pg. 4 ¶ 0075 - 0080, Pg. 5 ¶ 0084 - 0093, Pg. 6 ¶ 0096 - 0099 and 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 16 ¶ 0180 - 0182, Pg. 17 ¶ 0192 - 0193 and 0197 - 0203) Zhang et al. and Chen et al. are combinable because they are both directed towards image processing systems that perform object recognition operations utilizing light-weighted artificial neural network models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhang et al. with the teachings of Chen et al. This modification would have been prompted in order to substitute the neural network model compression process of Zhang et al. 
for the depthwise convolution and pointwise convolution neural network light-weighting technique of Chen et al. The depthwise convolution and pointwise convolution neural network light-weighting technique of Chen et al. could be substituted in place of the neural network model compression process of Zhang et al. utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination the depthwise convolution and pointwise convolution neural network light-weighting technique of Chen et al. would be utilized to realize the lightweight neural network of Zhang et al. that is utilized to detect at least one joint of the user. Furthermore, this modification would have been prompted by the teachings and suggestions of Zhang et al. that other ways of compressing or a combination of multiple compression ways may be used to compress their neural network to realize their lightweight neural network, see at least page 7 paragraphs 0124 - 0127 of Zhang et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the depthwise convolution and pointwise convolution neural network light-weighting technique of Chen et al. would be utilized to realize the lightweight neural network of Zhang et al. In addition, Zhang et al. in view of Chen et al. and Marks are combinable because they are all directed towards image processing systems and, similar to Zhang et al., Marks is also directed towards an image processing system that automatically evaluates a user’s golf swing by processing image data of the user’s golf swing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Zhang et al. in view of Chen et al. with the teachings of Marks. This modification would have been prompted in order to enhance the combined base device of Zhang et al. in view of Chen et al. 
with the well-known and applicable technique Marks applied to a comparable device. Varying the comparison point determined for the extracted at least one frame depending on the stage to which the extracted at least one frame corresponds, as taught by Marks, would enhance the combined base device by improving its ability to accurately, reliably and efficiently compare the user’s golf swing to the comparison target’s golf swing and thus estimate information on the user’s golf swing since instead of utilizing all possible comparison points in each image frame of the user’s golf swing to evaluate the user’s golf swing only the most important comparison point(s) that best represents the stage of the user’s golf swing to which an image frame belongs would be utilized to evaluate the user’s golf swing in order to ensure that the user’s golf swing is the main focus of the evaluation and reduce unnecessary and erroneous comparisons during the evaluation. Furthermore, this modification would have been prompted by the teachings and suggestions of Zhang et al. that it may be determined whether poses of objects are consistent only based on the similarity between key bone node vectors and that the key bone node vectors may be preset by the user, see at least page 16 paragraph 0280 of Zhang et al. 
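The rationale above rests on two mechanisms quoted from the references: per Zhang, key frames receive greater weights in a weighted average of per-frame consistency scores, and, per Marks, the comparison point used for a frame varies with the swing stage to which that frame belongs. A minimal sketch of that combination follows; the stage names, angle-based comparison points, key-frame weight, and the 90-degree normalization are illustrative assumptions, not values taken from any cited reference.

```python
# Hypothetical per-stage comparison points: each frame is scored only on
# the joint angle(s) that best represent its stage of the swing.
STAGE_COMPARISON_POINTS = {
    "address": ["waist_bend_angle"],
    "top_of_swing": ["shoulder_turn_angle"],
    "finish": ["hip_rotation_angle"],
}

def frame_consistency(user_angles, target_angles, stage):
    """Score one user/target frame pair on the stage's comparison points."""
    points = STAGE_COMPARISON_POINTS[stage]
    diffs = [abs(user_angles[p] - target_angles[p]) for p in points]
    # Map the mean angular difference (degrees) to a 0..1 consistency score.
    return max(0.0, 1.0 - (sum(diffs) / len(diffs)) / 90.0)

def swing_score(frames, key_frame_weight=3.0):
    """Weighted average of per-frame scores; key frames weigh more heavily."""
    total = weight_sum = 0.0
    for user_angles, target_angles, stage, is_key_frame in frames:
        w = key_frame_weight if is_key_frame else 1.0
        total += w * frame_consistency(user_angles, target_angles, stage)
        weight_sum += w
    return total / weight_sum
```

Only the stage-specific angle is compared per frame, so frames from other stages cannot introduce the unnecessary and erroneous comparisons the rationale describes.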
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the comparison point determined for the extracted at least one frame would be varied depending on the stage to which the extracted at least one frame corresponds so as to improve the ability of the combined base device to accurately, reliably and efficiently compare the user’s golf swing to the comparison target’s golf swing and thus estimate information on the user’s golf swing since instead of utilizing all possible comparison points in each image frame of the user’s golf swing to evaluate the user’s golf swing only the most important comparison point(s) that best represents the stage of the user’s golf swing to which an image frame belongs would be utilized to evaluate the user’s golf swing. Therefore, it would have been obvious to combine Zhang et al. with Chen et al. and Marks to obtain the invention as specified in claim 10. - With regards to claim 11, Zhang et al. in view of Chen et al. in view of Marks disclose the method of Claim 1. Zhang et al. fail to disclose explicitly wherein a depth of each kernel in the depthwise convolution is reduced to 1 and each of a width and a height of each kernel in the pointwise convolution is reduced to 1. Pertaining to analogous art, Chen et al. disclose wherein a depth of each kernel in the depthwise convolution is reduced to 1 (Chen et al., Figs. 4, 5 & 8, Pg. 1 ¶ 0008, Pg. 3 ¶ 0036, Pg. 4 ¶ 0042 - 0045, Pg. 6 ¶ 0057 - 0059) and each of a width and a height of each kernel in the pointwise convolution is reduced to 1. (Chen et al., Figs. 4 & 6 - 8, Pg. 1 ¶ 0009, Pg. 3 ¶ 0035 - Pg. 4 ¶ 0038, Pg. 5 ¶ 0047 - 0053, Pg. 6 ¶ 0057 - 0059) - With regards to claim 12, Zhang et al. in view of Chen et al. in view of Marks disclose the device of Claim 10. Zhang et al. 
fail to disclose explicitly wherein a depth of each kernel in the depthwise convolution is reduced to 1 and each of a width and a height of each kernel in the pointwise convolution is reduced to 1. Pertaining to analogous art, Chen et al. disclose wherein a depth of each kernel in the depthwise convolution is reduced to 1 (Chen et al., Figs. 4, 5 & 8, Pg. 1 ¶ 0008, Pg. 3 ¶ 0036, Pg. 4 ¶ 0042 - 0045, Pg. 6 ¶ 0057 - 0059) and each of a width and a height of each kernel in the pointwise convolution is reduced to 1. (Chen et al., Figs. 4 & 6 - 8, Pg. 1 ¶ 0009, Pg. 3 ¶ 0035 - Pg. 4 ¶ 0038, Pg. 5 ¶ 0047 - 0053, Pg. 6 ¶ 0057 - 0059) - With regards to claim 15, Zhang et al. in view of Chen et al. in view of Marks disclose the method of Claim 1, wherein the plurality of stages include two or more of an address stage, a takeaway stage, a back swing stage, a top-of-swing stage, a down swing stage, an impact stage, a follow-through stage, and a finish stage. (Zhang et al., Fig. 18A, Pg. 17 ¶ 0288 - 0291 [“when the user captures a video, the beginning of video may include redundant frames. To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . , Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons” and “In operation 1805, the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose.” The Examiner asserts that, for example, the key frames corresponding to the beginning and ending key gestures that are determined by Zhang et al. 
correspond to address and finish stages, respectively, of a golf swing. Furthermore, the Examiner asserts that Fig. 18A of Zhang et al. illustrates a plurality of stages of partial motions for a user’s golf swing and a comparison target’s golf swing.]) In addition, analogous art Marks discloses wherein the plurality of stages include two or more of an address stage, a takeaway stage, a back swing stage, a top-of-swing stage, a down swing stage, an impact stage, a follow-through stage, and a finish stage. (Marks, Figs. 6 & 28, Pg. 1 ¶ 0007, Pg. 4 ¶ 0075 - 0079, Pg. 5 ¶ 0084 - 0085, Pg. 6 ¶ 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 16 ¶ 0180 - 0182, Pg. 17 ¶ 0198 - 0200 [“FIGS. 1A-1C each illustrate one of the three positions from which component information is obtained according to embodiments described herein. These positions are shown as the start position in FIG. 1A, the top of swing position in FIG. 1B and the finish position in FIG. 3C.”]) - With regards to claim 16, Zhang et al. in view of Chen et al. in view of Marks disclose the method of Claim 1. Zhang et al. fail to disclose explicitly wherein when the predetermined stage is an address stage, a waist bend angle is determined as the comparison point. Pertaining to analogous art, Marks discloses wherein when the predetermined stage is an address stage, a waist bend angle is determined as the comparison point. (Marks, Figs. 2A, 2B, 6 & 10A - 10C, Pg. 1 ¶ 0021, Pg. 4 ¶ 0075 - 0076, Pg. 6 ¶ 0104 - 0105, Pg. 7 ¶ 0111 - 0113, Pg. 8 ¶ 0115) - With regards to claim 17, Zhang et al. in view of Chen et al. in view of Marks disclose the device of Claim 10, wherein the plurality of stages include two or more of an address stage, a takeaway stage, a back swing stage, a top-of-swing stage, a down swing stage, an impact stage, a follow-through stage, and a finish stage. (Zhang et al., Fig. 18A, Pg. 17 ¶ 0288 - 0291 [“when the user captures a video, the beginning of video may include redundant frames. 
To erase or ignore the redundant frames, key frames in the standard video are determined, for example, frames (F0, F1, . . . , Fn) of the beginning, ending and middle key gestures. In each image frame in the video taken by the user, image frames (F0′, F1′, . . . , Fn′) corresponding to key frames in the standard video are determined by calculating the similarly between skeletons” and “In operation 1805, the image streams of a standard pose and the image streams of the user pose may be adjusted, normalized, and synchronized for comparison between the two. In other words, the start point and the end point of the two poses—each of two streams of images—are synchronized for a precise and accurate comparison between the standard pose and the user pose.” The Examiner asserts that, for example, the key frames corresponding to the beginning and ending key gestures that are determined by Zhang et al. correspond to address and finish stages, respectively, of a golf swing. Furthermore, the Examiner asserts that Fig. 18A of Zhang et al. illustrates a plurality of stages of partial motions for a user’s golf swing and a comparison target’s golf swing.]) In addition, analogous art Marks discloses wherein the plurality of stages include two or more of an address stage, a takeaway stage, a back swing stage, a top-of-swing stage, a down swing stage, an impact stage, a follow-through stage, and a finish stage. (Marks, Figs. 6 & 28, Pg. 1 ¶ 0007, Pg. 4 ¶ 0075 - 0079, Pg. 5 ¶ 0084 - 0085, Pg. 6 ¶ 0104 - 0105, Pg. 10 ¶ 0126, Pg. 13 ¶ 0154, Pg. 16 ¶ 0180 - 0182, Pg. 17 ¶ 0198 - 0200 [“FIGS. 1A-1C each illustrate one of the three positions from which component information is obtained according to embodiments described herein. These positions are shown as the start position in FIG. 1A, the top of swing position in FIG. 1B and the finish position in FIG. 3C.”]) - With regards to claim 18, Zhang et al. in view of Chen et al. in view of Marks disclose the device of Claim 10. Zhang et al. 
fail to disclose explicitly wherein when the predetermined stage is an address stage, a waist bend angle is determined as the comparison point. Pertaining to analogous art, Marks discloses wherein when the predetermined stage is an address stage, a waist bend angle is determined as the comparison point. (Marks, Figs. 2A, 2B, 6 & 10A - 10C, Pg. 1 ¶ 0021, Pg. 4 ¶ 0075 - 0076, Pg. 6 ¶ 0104 - 0105, Pg. 7 ¶ 0111 - 0113, Pg. 8 ¶ 0115) Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. U.S. Publication No. 2019/0347826 A1 in view of Chen et al. U.S. Publication No. 2021/0200993 A1 in view of Marks U.S. Publication No. 2013/0316840 A1 as applied to claim 1 above, and further in view of Sooch U.S. Publication No. 2006/0252018 A1. - With regards to claim 5, Zhang et al. in view of Chen et al. in view of Marks disclose the method of Claim 1. Zhang et al. fail to disclose explicitly wherein in the step of comparing the user’s golf swing and the comparison target’s golf swing, the comparison target's golf swing is a golf swing of the user photographed at a different point of time from the user's golf swing. Pertaining to analogous art, Sooch discloses wherein in the step of comparing the user’s golf swing and the comparison target’s golf swing, the comparison target's golf swing is a golf swing of the user photographed at a different point of time from the user's golf swing. (Sooch, Abstract, Pg. 1 ¶ 0016, Pg. 4 ¶ 0080, Pg. 5 Claim 1 [“The user also has options to compare his/her swings from different sessions”]) Zhang et al. in view of Chen et al. in view of Marks and Sooch are combinable because they are all directed towards image processing systems and, similar to Zhang et al. and Marks, Sooch is also directed towards an image processing system that automatically evaluates a user’s golf swing by processing image data of the user’s golf swing. 
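Returning briefly to the claims 11 and 12 limitation addressed above (each depthwise kernel has a depth of 1; each pointwise kernel has a width and a height of 1), a depthwise-separable convolution can be sketched in NumPy. This is an illustrative reconstruction of the general light-weighting technique, not code from Chen et al.; the array shapes and valid-padding choice are assumptions.

```python
import numpy as np

def depthwise_conv(x, kernels):
    """Depthwise convolution: one k x k kernel of depth 1 per input channel."""
    c, h, w = x.shape
    kc, kh, kw = kernels.shape        # kc == c: one spatial kernel per channel
    oh, ow = h - kh + 1, w - kw + 1   # valid padding, stride 1
    out = np.zeros((c, oh, ow))
    for ch in range(c):               # each channel is convolved independently
        for i in range(oh):
            for j in range(ow):
                out[ch, i, j] = np.sum(x[ch, i:i + kh, j:j + kw] * kernels[ch])
    return out

def pointwise_conv(x, kernels):
    """Pointwise convolution: 1 x 1 kernels mix channels at each position."""
    # kernels shape: (out_channels, in_channels); spatial size is 1 x 1
    return np.einsum("oc,chw->ohw", kernels, x)

def separable_conv(x, dw_kernels, pw_kernels):
    """Light-weighted substitute for a full convolution layer."""
    return pointwise_conv(depthwise_conv(x, dw_kernels), pw_kernels)
```

Splitting the spatial filtering (depthwise) from the channel mixing (pointwise) is what reduces the parameter count relative to a full convolution with k x k x C kernels.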
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Zhang et al. in view of Chen et al. in view of Marks with the teachings of Sooch. This modification would have been prompted in order to enhance the combined base device of Zhang et al. in view of Chen et al. in view of Marks with the well-known and applicable technique Sooch applied to a comparable device. Utilizing a golf swing of the user photographed at a different point of time from the user's golf swing as the comparison target's golf swing, as taught by Sooch, would enhance the combined base device by allowing for users to evaluate the progress they are making in improving their golf swings since their current golf swing would be able to be compared against their golf swing that was photographed at a different point of time in the past thereby helping them to understand what is and is not working in their golf swing practice and/or training. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a golf swing of the user photographed at a different point of time from the user's golf swing would be utilized as the comparison target's golf swing so as to provide users with the ability to easily and effectively evaluate progress they are making in improving their golf swings. Therefore, it would have been obvious to combine Zhang et al. in view of Chen et al. in view of Marks with Sooch to obtain the invention as specified in claim 5. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. U.S. Publication No. 2019/0347826 A1 in view of Chen et al. U.S. Publication No. 2021/0200993 A1 in view of Marks U.S. Publication No. 2013/0316840 A1 as applied to claim 1 above, and further in view of Zhang et al. U.S. Publication No. 2017/0064214 A1, herein referred to as “Ma et al.”. 
- With regards to claim 8, Zhang et al. in view of Chen et al. in view of Marks disclose the method of Claim 1, wherein at least one of the detecting step, the step of estimating the user’s posture, the step of comparing the user’s golf swing and the comparison target’s golf swing, and the step of estimating the information on the user’s golf swing is performed according to the user. (Zhang et al., Pg. 9 ¶ 0162, Pg. 15 ¶ 0257 - 0262, Pg. 16 ¶ 0283 - Pg. 17 ¶ 0291) Zhang et al. fail to disclose explicitly wherein at least one step is performed according to a result of recognizing a voice from the user. Pertaining to analogous art, Ma et al. disclose wherein at least one of the detecting step, the step of estimating the user’s posture, the step of comparing the user’s golf swing and the comparison target’s golf swing, and the step of estimating the information on the user’s golf swing is performed according to a result of recognizing a voice from the user. (Ma et al., Figs. 13 - 16 & 20 - 21, Pg. 1 ¶ 0016 and 0020 - 0022, Pg. 4 ¶ 0089 - 0092, Pg. 6 ¶ 0138, Pg. 7 ¶ 0147 and 0159, Pg. 8 ¶ 0175 - 0176, Pg. 14 ¶ 0246 - 0247, 0252 and 0254 - 0257, Pg. 24 ¶ 0388) Zhang et al. in view of Chen et al. in view of Marks and Ma et al. are combinable because they are all directed towards image processing systems and, similar to Zhang et al., Ma et al. is also directed towards an image processing system that detects postures of people and determines consistency of postures. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Zhang et al. in view of Chen et al. in view of Marks with the teachings of Ma et al. This modification would have been prompted in order to enhance the combined base device of Zhang et al. in view of Chen et al. in view of Marks with the well-known and applicable technique Ma et al. applied to a similar device. 
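The Ma-style enhancement discussed here amounts to dispatching from a recognized utterance to one of the processing steps. A minimal sketch follows; the command phrases and placeholder step functions are hypothetical, standing in for the joint detection, posture estimation, and swing comparison operations of the combined system.

```python
# Placeholder pipeline steps; in the combined device these would be the
# actual joint detection, posture estimation, and swing comparison stages.
def detect_joints(frame):
    return f"joints({frame})"

def estimate_posture(frame):
    return f"posture({frame})"

def compare_swings(frame):
    return f"comparison({frame})"

# Hypothetical mapping from recognized utterances to pipeline steps.
VOICE_COMMANDS = {
    "detect joints": detect_joints,
    "check my posture": estimate_posture,
    "compare my swing": compare_swings,
}

def handle_voice(recognized_text, frame):
    """Run the pipeline step matching the recognized utterance, if any."""
    step = VOICE_COMMANDS.get(recognized_text.strip().lower())
    return step(frame) if step is not None else None
```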
Performing at least one of the steps according to a result of recognizing a voice from the user, as taught by Ma et al., would enhance the combined base device by simplifying its use for users since they would be able to initiate one or more of its steps in a hands-free and intuitive manner as they are practicing their golf swing via voice commands, thereby making it easier for them to simultaneously practice their golf swing and operate the combined base device. Furthermore, this modification would have been prompted by the teachings and suggestions of Zhang et al. that conventional technology includes voice recognition in cameras and that all of their functions may be started via the user’s instructions, see at least page 8 paragraph 0144 and page 9 paragraph 0162 of Zhang et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that at least one of the steps of the combined base device would be performed according to a result of recognizing a voice from the user so as to simplify use of the combined base device for users and make it easier for them to simultaneously practice their golf swing and operate the combined base device. Therefore, it would have been obvious to combine Zhang et al. in view of Chen et al. in view of Marks with Ma et al. to obtain the invention as specified in claim 8. Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. U.S. Publication No. 2019/0347826 A1 in view of Chen et al. U.S. Publication No. 2021/0200993 A1 in view of Marks U.S. Publication No. 2013/0316840 A1 as applied to claims 1 and 10 above, and further in view of Wang et al. U.S. Publication No. 2020/0272888 A1. - With regards to claim 13, Zhang et al. in view of Chen et al.
in view of Marks disclose the method of Claim 1, wherein the detecting step comprises the steps of: reconstructing the photographed image into a feature map image for the at least one joint of the user using the light-weighted artificial neural network model; (Zhang et al., Figs. 2 - 6, 9 & 10, Pg. 3 ¶ 0074 - Pg. 4 ¶ 0075, Pg. 4 ¶ 0084 - 0085, Pg. 6 ¶ 0106 and 0109 - 0113, Pg. 7 ¶ 0120 and 0125 - 0130, Pg. 9 ¶ 0159 - 0167) and detecting the at least one joint of the user by deriving the position of the at least one joint with reference to the at least one feature map image, (Zhang et al., Figs. 2 - 6, 9 & 10, Pg. 3 ¶ 0074 - Pg. 4 ¶ 0075, Pg. 4 ¶ 0084 - 0085, Pg. 6 ¶ 0106 and 0109 - 0113, Pg. 7 ¶ 0120 and 0125 - 0130, Pg. 9 ¶ 0159 - 0167, Pg. 17 ¶ 0297) and wherein a number of joints to be detected is limited such that at least one joint of importance not lower than a predetermined level is detected among the joints of the user. (Zhang et al., Figs. 9 - 11, Pg. 9 ¶ 0166 - 0173, Pg. 10 ¶ 0176 - 0184, Pg. 11 ¶ 0195 - 0200, Pg. 11 ¶ 0203 - Pg. 12 ¶ 0206, Pg. 12 ¶ 0210 - 0213 and 0219, Pg. 18 ¶ 0311 - 0312) Zhang et al. fail to disclose explicitly a heat map image for each of the at least one joint of the user, and deriving the position of the at least one joint with reference to brightness values of points in each of the at least one heat map image.

Pertaining to analogous art, Wang et al. disclose wherein the detecting step comprises the steps of: reconstructing the photographed image into a heat map image for each of the at least one joint of the user using the artificial neural network model; (Wang et al., Abstract, Figs. 2 - 4, 6 - 7D, 9 & 13, Pg. 2 ¶ 0024 - Pg. 3 ¶ 0028, Pg. 4 ¶ 0034, Pg. 5 ¶ 0045) and detecting the at least one joint of the user by deriving the position of the at least one joint with reference to brightness values of points in each of the at least one heat map image. (Wang et al., Abstract, Figs. 2 - 4, 7A - 7D, 9 & 13, Pg. 3 ¶ 0027 - 0028, Pg. 4 ¶ 0037, Pg. 5 ¶ 0045)

Zhang et al. in view of Chen et al. in view of Marks and Wang et al. are combinable because they are all directed towards image processing systems that detect key points, such as joints, on humans and perform pose estimation; similar to Zhang et al. and Chen et al., Wang et al. is also directed towards an image processing system that utilizes an artificial neural network model to detect key points, such as joints, on humans and perform pose estimation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Zhang et al. in view of Chen et al. in view of Marks with the teachings of Wang et al. This modification would have been prompted in order to substitute the joint detection technique of Wang et al. for the joint detection process of Zhang et al. The joint detection technique of Wang et al. could be substituted in place of the joint detection process of Zhang et al. utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination the joint detection technique of Wang et al., which produces a heatmap for each joint to be detected and derives the position of each joint from the brightness values of points in its respective heatmap, would be utilized to detect and estimate the positions of the joints of the user in the photographed image. Therefore, it would have been obvious to combine Zhang et al. in view of Chen et al. in view of Marks with Wang et al.
to obtain the invention as specified in claim 13.

With regard to claim 14, Zhang et al. in view of Chen et al. in view of Marks disclose the device of Claim 10, wherein the one or more processors (Zhang et al., Figs. 7, 22 & 23, Pg. 1 ¶ 0005 - 0006, Pg. 3 ¶ 0066, Pg. 7 ¶ 0127 - 0128, Pg. 18 ¶ 0309 - 0313 and 0318, Pg. 19 ¶ 0321 - 0326) are configured to: reconstruct the photographed image into a feature map image for the at least one joint of the user using the light-weighted artificial neural network model; (Zhang et al., Figs. 2 - 6, 9 & 10, Pg. 3 ¶ 0074 - Pg. 4 ¶ 0075, Pg. 4 ¶ 0084 - 0085, Pg. 6 ¶ 0106 and 0109 - 0113, Pg. 7 ¶ 0120 and 0125 - 0130, Pg. 9 ¶ 0159 - 0167) and detect the at least one joint of the user by deriving the position of the at least one joint with reference to the at least one feature map image, (Zhang et al., Figs. 2 - 6, 9 & 10, Pg. 3 ¶ 0074 - Pg. 4 ¶ 0075, Pg. 4 ¶ 0084 - 0085, Pg. 6 ¶ 0106 and 0109 - 0113, Pg. 7 ¶ 0120 and 0125 - 0130, Pg. 9 ¶ 0159 - 0167, Pg. 17 ¶ 0297) and wherein a number of joints to be detected is limited such that at least one joint of importance not lower than a predetermined level is detected among the joints of the user. (Zhang et al., Figs. 9 - 11, Pg. 9 ¶ 0166 - 0173, Pg. 10 ¶ 0176 - 0184, Pg. 11 ¶ 0195 - 0200, Pg. 11 ¶ 0203 - Pg. 12 ¶ 0206, Pg. 12 ¶ 0210 - 0213 and 0219, Pg. 18 ¶ 0311 - 0312) Zhang et al. fail to disclose explicitly a heat map image for each of the at least one joint of the user, and deriving the position of the at least one joint with reference to brightness values of points in each of the at least one heat map image.

Pertaining to analogous art, Wang et al. disclose wherein the one or more processors (Wang et al., Abstract, Figs. 1 & 14, Pg. 2 ¶ 0022, Pg. 6 ¶ 0052 - 0057) are configured to: reconstruct the photographed image into a heat map image for each of the at least one joint of the user using the artificial neural network model; (Wang et al., Abstract, Figs. 2 - 4, 6 - 7D, 9 & 13, Pg. 2 ¶ 0024 - Pg. 3 ¶ 0028, Pg. 4 ¶ 0034, Pg. 5 ¶ 0045) and detect the at least one joint of the user by deriving the position of the at least one joint with reference to brightness values of points in each of the at least one heat map image. (Wang et al., Abstract, Figs. 2 - 4, 7A - 7D, 9 & 13, Pg. 3 ¶ 0027 - 0028, Pg. 4 ¶ 0037, Pg. 5 ¶ 0045)

Zhang et al. in view of Chen et al. in view of Marks and Wang et al. are combinable because they are all directed towards image processing systems that detect key points, such as joints, on humans and perform pose estimation; similar to Zhang et al. and Chen et al., Wang et al. is also directed towards an image processing system that utilizes an artificial neural network model to detect key points, such as joints, on humans and perform pose estimation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Zhang et al. in view of Chen et al. in view of Marks with the teachings of Wang et al. This modification would have been prompted in order to substitute the joint detection technique of Wang et al. for the joint detection process of Zhang et al. The joint detection technique of Wang et al. could be substituted in place of the joint detection process of Zhang et al. utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination the joint detection technique of Wang et al., which produces a heatmap for each joint to be detected and derives the position of each joint from the brightness values of points in its respective heatmap, would be utilized to detect and estimate the positions of the joints of the user in the photographed image.
Therefore, it would have been obvious to combine Zhang et al. in view of Chen et al. in view of Marks with Wang et al. to obtain the invention as specified in claim 14.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH, whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm, Monday - Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERIC RUSH/
Primary Examiner, Art Unit 2677

Prosecution Timeline

Apr 11, 2022
Application Filed
Jun 11, 2024
Non-Final Rejection — §103, §112
Sep 12, 2024
Response Filed
Dec 23, 2024
Final Rejection — §103, §112
Feb 26, 2025
Response after Non-Final Action
Apr 03, 2025
Request for Continued Examination
Apr 07, 2025
Response after Non-Final Action
Jul 26, 2025
Non-Final Rejection — §103, §112
Oct 30, 2025
Response Filed
Feb 07, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586229
COMPUTER IMPLEMENTED METHODS AND DEVICES FOR DETERMINING DIMENSIONS AND DISTANCES OF HEAD FEATURES
2y 5m to grant; granted Mar 24, 2026
Patent 12548292
METHOD AND SYSTEM FOR IDENTIFYING REFLECTIONS IN THERMAL IMAGES
2y 5m to grant; granted Feb 10, 2026
Patent 12548395
SYSTEMS, METHODS AND DEVICES FOR MONITORING BETTING ACTIVITIES
2y 5m to grant; granted Feb 10, 2026
Patent 12541856
MASKING OF OBJECTS IN AN IMAGE STREAM
2y 5m to grant; granted Feb 03, 2026
Patent 12518504
METHOD FOR CALIBRATING AN OBJECT RE-IDENTIFICATION SOLUTION IMPLEMENTING AN ARRAY OF A PLURALITY OF CAMERAS
2y 5m to grant; granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
61%
Grant Probability
97%
With Interview (+36.2%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 628 resolved cases by this examiner. Grant probability derived from career allow rate.
