Prosecution Insights
Last updated: April 19, 2026
Application No. 17/976,812

Selecting and Reporting Objects Based on Events

Final Rejection — §101, §102, §103, §DP
Filed: Oct 30, 2022
Examiner: CASTILLO-TORRES, KEISHA Y
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Roboporter Ltd.
OA Round: 2 (Final)
Grant Probability: 74% — Favorable
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% — above average (80 granted / 108 resolved; +12.1% vs TC avg)
Interview Lift: +30.5% on resolved cases with interview — strong
Avg Prosecution: 3y 0m (32 applications currently pending)
Total Applications: 140, across all art units
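The page does not state how the lift figure is computed. A plausible reading, sketched below in Python, is a simple percentage-point gap between with-interview and without-interview allow rates; the with/without case split used here is invented solely to approximately reproduce the displayed numbers.

```python
# Hypothetical reconstruction of the "Interview Lift" metric; the dashboard
# does not state its formula, and the with/without split below is invented.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# An invented split of the 108 resolved / 80 granted career cases:
with_interview = allow_rate(28, 29)      # ~96.6% over 29 interviewed cases
without_interview = allow_rate(52, 79)   # ~65.8% over 79 other cases

print(f"interview lift: {with_interview - without_interview:+.1f} points")
# -> interview lift: +30.7 points (close to the page's +30.5%)
```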

Statute-Specific Performance

§101: 26.2% (-13.8% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§103: 42.9% (+2.9% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 108 resolved cases.

Office Action

§101 §102 §103 §DP
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on 07/21/2025. Claim 8 has been canceled by the Applicant. Claim 21 has been newly added by the Applicant. Claims 1-7 and 9-21 are pending and have been examined. Hence, this action has been made FINAL.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments and Amendments

Amendments to the claims by the Applicant have been considered and addressed below. With respect to the Drawing Objections and the 35 U.S.C. § 101, § 102, and § 103 rejections, the Applicant provides several arguments, to which the Examiner responds below.

Drawing Objections (arguments on page 31 of the Remarks filed on 07/21/2025)

Examiner's Response to Arguments: Applicant's arguments with respect to the drawing objections have been fully considered and are persuasive. The drawing objections have been withdrawn.

35 U.S.C. § 101 rejection(s) (arguments on page 31 of the Remarks filed on 07/21/2025)

Examiner's Response to Arguments: Arguments and amendments have been considered but are not persuasive. For more details, please refer to the updated 35 U.S.C. § 101 rejections for claims 1-7 and 9-21 below.

35 U.S.C. § 103 rejection(s) (arguments on page 31 of the Remarks filed on 07/21/2025)

Examiner's Response to Arguments: Applicant's arguments with respect to claims 1 and 19-20 under 35 U.S.C. § 102 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Therefore, that rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Zhang et al. (US 20190366153 A1). For more details, please refer to the updated 35 U.S.C. § 103 rejections for claims 1-7 and 9-21 below.

Double Patenting (arguments on page 32 of the Remarks filed on 07/21/2025)

Examiner's Response to Arguments: The request for the Double Patenting rejections to be held in abeyance is granted.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-7 and 9-21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of co-pending patent application (U.S. Patent Application No. 17/976,804) in view of Zhang et al. (US 20190366153 A1). The claims of the co-pending application are similar in scope to those of the instant application. However, the claims of the co-pending patent application (U.S. Patent Application No. 
US 17/976,804) do not explicitly teach wherein the method, as presented in the instant application independent claims, comprise: calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data; identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object; receiving an indication of a first group of two or more events in the physical world caused by the first physical object; receiving an indication of a second group of two or more events in the physical world caused by the second physical object; based on the first group of two or more events, determining to include in a textual content a description based on the first group of two or more events of the first physical object; based on the second group of two or more events, determining not to include in the textual content any description based on the second group of two or more events of the second physical object; Zhang et al. does teach wherein the method further comprises: calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data (see ¶ [0085]: “In process step 210, objects of interests are detected from frames of the input video. In particular, one or more convolutional neural networks (CNN) may be applied to identify desired objects including balls and players in the input video, and the detected objects are passed as input 215 to process step 220. Each CNN module may be trained using one or more prior input videos. In individual training sessions, only a single player is present, although multiple balls may be moving through the court if a basketball shooting machine is used. In multiple-player training sessions or games, multiple players and multiple balls may be present. A CNN utilizes the process of convolution to capture the spatial and temporal dependencies in an image, and to extract features from the input video for object detection. Feature extraction in turn enables the segmentations or identifications of image areas representing balls and players, and further analysis to determine player body postures. A ball moves through space, leading to changing size and location from video frame to video frame. A player also moves through space while handling the ball leading to both changing locations, sizes, and body postures.”); identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object (see Fig. 9B (players: 904-905, ball: 902, and basket/hoop: 980), ¶ [0085] citation as in limitation above. More specifically: “…A CNN utilizes the process of convolution to capture the spatial and temporal dependencies in an image, and to extract features from the input video for object detection. Feature extraction in turn enables the segmentations or identifications of image areas representing balls and players, and further analysis to determine player body postures. A ball moves through space, leading to changing size and location from video frame to video frame. 
A player also moves through space while handling the ball leading to both changing locations, sizes, and body postures.”, and ¶ [0154]: “Embodiments of the present invention may first detect the ball, determine a corresponding trajectory, then trace the ball trajectory to see if it ends in a shot attempt. For example, a box 902 in FIG. 9A represents a ball extraction result with confidence value of 1.000. Trajectory 903, represented as a dotted curve in FIG. 9A, may be reconstructed directly from a ball flow comprising a sequence of ball objects, or be generated by interpolating and/or extrapolating several known ball positions in air. Trajectory 903 represents a pass from player 905 to player 904, where the ball does not move above any of these two player's upper bodies, or come close to basket 980. By comparison, trajectory 963 in FIG. 9G corresponds to a shot attempt by shooter 904. Once a ball flow or trajectory such as 963 is determined, the ball flow can be examined to determine whether the ball has been thrown from the shooter's upper body upward, and if so, declare it as a shot attempt.” Here, the first physical object is analogous to player 904, while the second physical object is analogous to player 905.); receiving an indication of a first group of two or more events in the physical world caused by the first physical object (see ¶ [0020 and 0099-0100]: “[0020] In some embodiments, the shot event is selected from the group consisting of dribble event, jump event, catch-ball event, ball-leave-hand event, one-two leg jump, shooter's foot-on-ground movement, and the shot type is selected from the group consisting of layup, regular shot, dribble-pull-up, off-the-move, and catch-and-shoot. [0073] FIG. 1B is a flow diagram 190 providing a process overview of using a mobile device-based NEX system 150 to generate shot analytics and statistics, according to one embodiment of the present invention. This exemplary process takes as inputs a video segment or video stream, and/or a shooter's location in any given frame of the video input. Through new and novel methods for computer vision and algorithmic analysis, systems and devices implemented according to embodiments of the present invention extract various shot analytics, including, but are not limited to, shot type, release time, release angle, shooter body bend angle, leg bend ratio, moving speed and direction, and height of a jump event. The input video may be a live-stream, or an off-line recording, and may be a single perspective video, also known as a monocular video. [0099] With filtered flow and shot information 315, the NEX system may apply the remaining process steps in FIG. 3 to determine one or more shot events occurring before the ball-from-shooter time, and to generate one or more shot analytics 185 based on the one or more shot events, the shooter posture flow, and the related ball flow. In this disclosure, a “shot event” refers to player actions leading up to a shot attempt. That is, a shot event describes player movements before the ball leaves the shooter's hand in a shot attempt. A shot event may occur right before a shot is launched, or some time shortly before the shot is launched. [0100] In process step 320 shown in FIG. 3, several exemplary shot events are detected, for example, a dribble event, a jump event, a catch-ball event, as well as shooter movement in image space. 
Detected shot events, shooter movement in image space, shooter posture flow, and ball-shoot-from-hand time are used as input 325 to further processing steps 329, 330 and optionally 331, to determine one or more shot analytics.” Here, the Examiner notes that the indication of a first group of two or more events is read by disclosures in Zhang et al. regarding events associated with player 904 and the ball (e.g., shot event: dibble event, jump event, catch-ball event, etc.).); receiving an indication of a second group of two or more events in the physical world caused by the second physical object (see ¶ [0020 and 0099-0100] citations as in limitation above and further ¶ [0150]: “In table 810, raw information is divided into ball information 812, shooter information 814, events information 816, and scene information 818. For balls extracted from the input video, one or more ball flow and trajectories may be identified, and shot attempts may be determined based on the ball trajectories and their positions relative to the hoop. For the shooter, pose information may be determined from, for example, 18 key points on the body. Following a shot attempt trajectory, shooter poses may be detected in the region around the ball, and tracked as shooter poses. In some embodiments, more than one player may be present, and shooter information 814 may refer to player pose information and player posture flow as discussed with reference to FIGS. 1B to 4. In addition, shooter information 814 may be correlated with ball information 812 to determine different shot events such as ball-leave-hand, jump, dribble, and catch-ball events. Scene information 818 includes how hoop, court, and other relevant objects of interests are placed within the image domain, including hoop detection information and how court is placed in the image. Such scene information may be combined with other ball, shooter, and events information to generate shot analytics and/or game analytics, such as determining whether a shot is a 3-pointer or not.” Here, the Examiner notes that the indication of a second group of two or more events is read by disclosures in Zhang et al. regarding events associated with the second player 905 and the ball OR by disclosures of other scene information (e.g., hoop, court or other relevant objects present in the image).); based on the first group of two or more events, determining to include in a textual content a description based on the first group of two or more events of the first physical object (see ¶ [0020, 0099-0100, and 150] citations as in limitation(s) above and further Fig. 11A (1142 (shooting info) and 1115 (jump event and ball-leave hand event)) and ¶ [0021]: “In some embodiments, the shot analytics is selected from the group consisting of release time, back angle, leg bend ratio, leg power, moving speed, moving direction, and height of jump.” Here, the determining to include in textual content a description based on the first group of two or more events is read by disclosures in Zhang et al. regarding shot analytics or the events associated with player 904 managing the ball (e.g., shot event: dibble event, jump event, catch-ball event, etc.). For example, as seen in Fig. 
11A: “Shooting info”, “jump event, ball-leave-hand event”.); based on the second group of two or more events, determining not to include in the textual content any description based on the second group of two or more events of the second physical object (see ¶ [0020, 0099-0100, and 150] citations as in limitation(s) above and further Fig. 11A (1142 (shooting info) and 1115 (jump event and ball-leave hand event)) and ¶ [0021]: “In some embodiments, the shot analytics is selected from the group consisting of release time, back angle, leg bend ratio, leg power, moving speed, moving direction, and height of jump.” Here, the Examiner notes that no information regarding the additional player 905 OR scene information (e.g., hoop) are included in the shot analytics as disclosed in Zhang et al., wherein only shot analytics or the events associated with player 904 managing the ball are included (e.g., shot event: dibble event, jump event, catch-ball event, etc.). For example, as seen in Fig. 11A: “Shooting info”, “jump event, ball-leave-hand event”.); U.S. Patent Application No. US 17/976,804 in view of Zhang et al. (US 20190366153 A1) are considered to be analogous to the claimed invention because they are in the same field of endeavor in text/information generation. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified U.S. Patent Application No. US 17/976,804 to incorporate the teachings of Zhang et al. of calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data; identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object; receiving an indication of a first group of two or more events in the physical world caused by the first physical object; receiving an indication of a second group of two or more events in the physical world caused by the second physical object; based on the first group of two or more events, determining to include in a textual content a description based on the first group of two or more events of the first physical object; based on the second group of two or more events, determining not to include in the textual content any description based on the second group of two or more events of the second physical object; which provides the benefit optimizing the feature extraction process to not only reduce the overall computational complexity but also improve the achievable accuracy by tailoring to the specific small input and ball detection goal.([0134] of Zhang et al.). Claims 1-7 and 9-21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of co-pending patent application (U.S. Patent Application No. 17/976,806) in view of Zhang et al. (US 20190366153 A1). The claims of the issued patent are similar in scope than that of the instant application. However, the claims of the co-pending patent application (U.S. Patent Application No. 
US 17/976,806) do not explicitly teach wherein the method, as presented in the instant application independent claims, comprise: calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data; identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object; receiving an indication of a first group of two or more events in the physical world caused by the first physical object; receiving an indication of a second group of two or more events in the physical world caused by the second physical object; based on the first group of two or more events, determining to include in a textual content a description based on the first group of two or more events of the first physical object; based on the second group of two or more events, determining not to include in the textual content any description based on the second group of two or more events of the second physical object; Zhang et al. does teach wherein the method further comprises: calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data (see ¶ [0085]: “In process step 210, objects of interests are detected from frames of the input video. In particular, one or more convolutional neural networks (CNN) may be applied to identify desired objects including balls and players in the input video, and the detected objects are passed as input 215 to process step 220. Each CNN module may be trained using one or more prior input videos. In individual training sessions, only a single player is present, although multiple balls may be moving through the court if a basketball shooting machine is used. In multiple-player training sessions or games, multiple players and multiple balls may be present. A CNN utilizes the process of convolution to capture the spatial and temporal dependencies in an image, and to extract features from the input video for object detection. Feature extraction in turn enables the segmentations or identifications of image areas representing balls and players, and further analysis to determine player body postures. A ball moves through space, leading to changing size and location from video frame to video frame. A player also moves through space while handling the ball leading to both changing locations, sizes, and body postures.”); identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object (see Fig. 9B (players: 904-905, ball: 902, and basket/hoop: 980), ¶ [0085] citation as in limitation above. More specifically: “…A CNN utilizes the process of convolution to capture the spatial and temporal dependencies in an image, and to extract features from the input video for object detection. Feature extraction in turn enables the segmentations or identifications of image areas representing balls and players, and further analysis to determine player body postures. A ball moves through space, leading to changing size and location from video frame to video frame. 
A player also moves through space while handling the ball leading to both changing locations, sizes, and body postures.”, and ¶ [0154]: “Embodiments of the present invention may first detect the ball, determine a corresponding trajectory, then trace the ball trajectory to see if it ends in a shot attempt. For example, a box 902 in FIG. 9A represents a ball extraction result with confidence value of 1.000. Trajectory 903, represented as a dotted curve in FIG. 9A, may be reconstructed directly from a ball flow comprising a sequence of ball objects, or be generated by interpolating and/or extrapolating several known ball positions in air. Trajectory 903 represents a pass from player 905 to player 904, where the ball does not move above any of these two player's upper bodies, or come close to basket 980. By comparison, trajectory 963 in FIG. 9G corresponds to a shot attempt by shooter 904. Once a ball flow or trajectory such as 963 is determined, the ball flow can be examined to determine whether the ball has been thrown from the shooter's upper body upward, and if so, declare it as a shot attempt.” Here, the first physical object is analogous to player 904, while the second physical object is analogous to player 905.); receiving an indication of a first group of two or more events in the physical world caused by the first physical object (see ¶ [0020 and 0099-0100]: “[0020] In some embodiments, the shot event is selected from the group consisting of dribble event, jump event, catch-ball event, ball-leave-hand event, one-two leg jump, shooter's foot-on-ground movement, and the shot type is selected from the group consisting of layup, regular shot, dribble-pull-up, off-the-move, and catch-and-shoot. [0073] FIG. 1B is a flow diagram 190 providing a process overview of using a mobile device-based NEX system 150 to generate shot analytics and statistics, according to one embodiment of the present invention. This exemplary process takes as inputs a video segment or video stream, and/or a shooter's location in any given frame of the video input. Through new and novel methods for computer vision and algorithmic analysis, systems and devices implemented according to embodiments of the present invention extract various shot analytics, including, but are not limited to, shot type, release time, release angle, shooter body bend angle, leg bend ratio, moving speed and direction, and height of a jump event. The input video may be a live-stream, or an off-line recording, and may be a single perspective video, also known as a monocular video. [0099] With filtered flow and shot information 315, the NEX system may apply the remaining process steps in FIG. 3 to determine one or more shot events occurring before the ball-from-shooter time, and to generate one or more shot analytics 185 based on the one or more shot events, the shooter posture flow, and the related ball flow. In this disclosure, a “shot event” refers to player actions leading up to a shot attempt. That is, a shot event describes player movements before the ball leaves the shooter's hand in a shot attempt. A shot event may occur right before a shot is launched, or some time shortly before the shot is launched. [0100] In process step 320 shown in FIG. 3, several exemplary shot events are detected, for example, a dribble event, a jump event, a catch-ball event, as well as shooter movement in image space. 
Detected shot events, shooter movement in image space, shooter posture flow, and ball-shoot-from-hand time are used as input 325 to further processing steps 329, 330 and optionally 331, to determine one or more shot analytics.” Here, the Examiner notes that the indication of a first group of two or more events is read by disclosures in Zhang et al. regarding events associated with player 904 and the ball (e.g., shot event: dibble event, jump event, catch-ball event, etc.).); receiving an indication of a second group of two or more events in the physical world caused by the second physical object (see ¶ [0020 and 0099-0100] citations as in limitation above and further ¶ [0150]: “In table 810, raw information is divided into ball information 812, shooter information 814, events information 816, and scene information 818. For balls extracted from the input video, one or more ball flow and trajectories may be identified, and shot attempts may be determined based on the ball trajectories and their positions relative to the hoop. For the shooter, pose information may be determined from, for example, 18 key points on the body. Following a shot attempt trajectory, shooter poses may be detected in the region around the ball, and tracked as shooter poses. In some embodiments, more than one player may be present, and shooter information 814 may refer to player pose information and player posture flow as discussed with reference to FIGS. 1B to 4. In addition, shooter information 814 may be correlated with ball information 812 to determine different shot events such as ball-leave-hand, jump, dribble, and catch-ball events. Scene information 818 includes how hoop, court, and other relevant objects of interests are placed within the image domain, including hoop detection information and how court is placed in the image. Such scene information may be combined with other ball, shooter, and events information to generate shot analytics and/or game analytics, such as determining whether a shot is a 3-pointer or not.” Here, the Examiner notes that the indication of a second group of two or more events is read by disclosures in Zhang et al. regarding events associated with the second player 905 and the ball OR by disclosures of other scene information (e.g., hoop, court or other relevant objects present in the image).); based on the first group of two or more events, determining to include in a textual content a description based on the first group of two or more events of the first physical object (see ¶ [0020, 0099-0100, and 150] citations as in limitation(s) above and further Fig. 11A (1142 (shooting info) and 1115 (jump event and ball-leave hand event)) and ¶ [0021]: “In some embodiments, the shot analytics is selected from the group consisting of release time, back angle, leg bend ratio, leg power, moving speed, moving direction, and height of jump.” Here, the determining to include in textual content a description based on the first group of two or more events is read by disclosures in Zhang et al. regarding shot analytics or the events associated with player 904 managing the ball (e.g., shot event: dibble event, jump event, catch-ball event, etc.). For example, as seen in Fig. 
11A: “Shooting info”, “jump event, ball-leave-hand event”.); based on the second group of two or more events, determining not to include in the textual content any description based on the second group of two or more events of the second physical object (see ¶ [0020, 0099-0100, and 150] citations as in limitation(s) above and further Fig. 11A (1142 (shooting info) and 1115 (jump event and ball-leave hand event)) and ¶ [0021]: “In some embodiments, the shot analytics is selected from the group consisting of release time, back angle, leg bend ratio, leg power, moving speed, moving direction, and height of jump.” Here, the Examiner notes that no information regarding the additional player 905 OR scene information (e.g., hoop) are included in the shot analytics as disclosed in Zhang et al., wherein only shot analytics or the events associated with player 904 managing the ball are included (e.g., shot event: dibble event, jump event, catch-ball event, etc.). For example, as seen in Fig. 11A: “Shooting info”, “jump event, ball-leave-hand event”.); U.S. Patent Application No. US 17/976,806 in view of Zhang et al. (US 20190366153 A1) are considered to be analogous to the claimed invention because they are in the same field of endeavor in text/information generation. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified U.S. Patent Application No. US 17/976,806 to incorporate the teachings of Zhang et al. of calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data; identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object; receiving an indication of a first group of two or more events in the physical world caused by the first physical object; receiving an indication of a second group of two or more events in the physical world caused by the second physical object; based on the first group of two or more events, determining to include in a textual content a description based on the first group of two or more events of the first physical object; based on the second group of two or more events, determining not to include in the textual content any description based on the second group of two or more events of the second physical object; which provides the benefit optimizing the feature extraction process to not only reduce the overall computational complexity but also improve the achievable accuracy by tailoring to the specific small input and ball detection goal.([0134] of Zhang et al.). Claims 1-7 and 9-21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of co-pending patent application (U.S. Patent Application No. 17/976,807) in view of Zhang et al. (US 20190366153 A1). The claims of the issued patent are similar in scope than that of the instant application. However, the claims of the co-pending patent application (U.S. Patent Application No. 
US 17/976,807) do not explicitly teach wherein the method, as presented in the instant application independent claims, comprise: receiving an indication of a plurality of objects, the plurality of objects includes at least a first physical object and a second physical object; calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data; identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object; receiving an indication of a first group of two or more events in the physical world caused by the first physical object; receiving an indication of a second group of two or more events in the physical world caused by the second physical object; based on the first group of two or more events, determining to include in a textual content a description based on the first group of two or more events of the first physical object; based on the second group of two or more events, determining not to include in the textual content any description based on the second group of two or more events of the second physical object; Zhang et al. does teach wherein the method further comprises: calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data (see ¶ [0085]: “In process step 210, objects of interests are detected from frames of the input video. In particular, one or more convolutional neural networks (CNN) may be applied to identify desired objects including balls and players in the input video, and the detected objects are passed as input 215 to process step 220. Each CNN module may be trained using one or more prior input videos. In individual training sessions, only a single player is present, although multiple balls may be moving through the court if a basketball shooting machine is used. In multiple-player training sessions or games, multiple players and multiple balls may be present. A CNN utilizes the process of convolution to capture the spatial and temporal dependencies in an image, and to extract features from the input video for object detection. Feature extraction in turn enables the segmentations or identifications of image areas representing balls and players, and further analysis to determine player body postures. A ball moves through space, leading to changing size and location from video frame to video frame. A player also moves through space while handling the ball leading to both changing locations, sizes, and body postures.”); identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object (see Fig. 9B (players: 904-905, ball: 902, and basket/hoop: 980), ¶ [0085] citation as in limitation above. More specifically: “…A CNN utilizes the process of convolution to capture the spatial and temporal dependencies in an image, and to extract features from the input video for object detection. Feature extraction in turn enables the segmentations or identifications of image areas representing balls and players, and further analysis to determine player body postures. 
A ball moves through space, leading to changing size and location from video frame to video frame. A player also moves through space while handling the ball leading to both changing locations, sizes, and body postures.”, and ¶ [0154]: “Embodiments of the present invention may first detect the ball, determine a corresponding trajectory, then trace the ball trajectory to see if it ends in a shot attempt. For example, a box 902 in FIG. 9A represents a ball extraction result with confidence value of 1.000. Trajectory 903, represented as a dotted curve in FIG. 9A, may be reconstructed directly from a ball flow comprising a sequence of ball objects, or be generated by interpolating and/or extrapolating several known ball positions in air. Trajectory 903 represents a pass from player 905 to player 904, where the ball does not move above any of these two player's upper bodies, or come close to basket 980. By comparison, trajectory 963 in FIG. 9G corresponds to a shot attempt by shooter 904. Once a ball flow or trajectory such as 963 is determined, the ball flow can be examined to determine whether the ball has been thrown from the shooter's upper body upward, and if so, declare it as a shot attempt.” Here, the first physical object is analogous to player 904, while the second physical object is analogous to player 905.); receiving an indication of a first group of two or more events in the physical world caused by the first physical object (see ¶ [0020 and 0099-0100]: “[0020] In some embodiments, the shot event is selected from the group consisting of dribble event, jump event, catch-ball event, ball-leave-hand event, one-two leg jump, shooter's foot-on-ground movement, and the shot type is selected from the group consisting of layup, regular shot, dribble-pull-up, off-the-move, and catch-and-shoot. [0073] FIG. 1B is a flow diagram 190 providing a process overview of using a mobile device-based NEX system 150 to generate shot analytics and statistics, according to one embodiment of the present invention. This exemplary process takes as inputs a video segment or video stream, and/or a shooter's location in any given frame of the video input. Through new and novel methods for computer vision and algorithmic analysis, systems and devices implemented according to embodiments of the present invention extract various shot analytics, including, but are not limited to, shot type, release time, release angle, shooter body bend angle, leg bend ratio, moving speed and direction, and height of a jump event. The input video may be a live-stream, or an off-line recording, and may be a single perspective video, also known as a monocular video. [0099] With filtered flow and shot information 315, the NEX system may apply the remaining process steps in FIG. 3 to determine one or more shot events occurring before the ball-from-shooter time, and to generate one or more shot analytics 185 based on the one or more shot events, the shooter posture flow, and the related ball flow. In this disclosure, a “shot event” refers to player actions leading up to a shot attempt. That is, a shot event describes player movements before the ball leaves the shooter's hand in a shot attempt. A shot event may occur right before a shot is launched, or some time shortly before the shot is launched. [0100] In process step 320 shown in FIG. 3, several exemplary shot events are detected, for example, a dribble event, a jump event, a catch-ball event, as well as shooter movement in image space. 
Detected shot events, shooter movement in image space, shooter posture flow, and ball-shoot-from-hand time are used as input 325 to further processing steps 329, 330 and optionally 331, to determine one or more shot analytics.” Here, the Examiner notes that the indication of a first group of two or more events is read by disclosures in Zhang et al. regarding events associated with player 904 and the ball (e.g., shot event: dibble event, jump event, catch-ball event, etc.).); receiving an indication of a second group of two or more events in the physical world caused by the second physical object (see ¶ [0020 and 0099-0100] citations as in limitation above and further ¶ [0150]: “In table 810, raw information is divided into ball information 812, shooter information 814, events information 816, and scene information 818. For balls extracted from the input video, one or more ball flow and trajectories may be identified, and shot attempts may be determined based on the ball trajectories and their positions relative to the hoop. For the shooter, pose information may be determined from, for example, 18 key points on the body. Following a shot attempt trajectory, shooter poses may be detected in the region around the ball, and tracked as shooter poses. In some embodiments, more than one player may be present, and shooter information 814 may refer to player pose information and player posture flow as discussed with reference to FIGS. 1B to 4. In addition, shooter information 814 may be correlated with ball information 812 to determine different shot events such as ball-leave-hand, jump, dribble, and catch-ball events. Scene information 818 includes how hoop, court, and other relevant objects of interests are placed within the image domain, including hoop detection information and how court is placed in the image. Such scene information may be combined with other ball, shooter, and events information to generate shot analytics and/or game analytics, such as determining whether a shot is a 3-pointer or not.” Here, the Examiner notes that the indication of a second group of two or more events is read by disclosures in Zhang et al. regarding events associated with the second player 905 and the ball OR by disclosures of other scene information (e.g., hoop, court or other relevant objects present in the image).); based on the first group of two or more events, determining to include in a textual content a description based on the first group of two or more events of the first physical object (see ¶ [0020, 0099-0100, and 150] citations as in limitation(s) above and further Fig. 11A (1142 (shooting info) and 1115 (jump event and ball-leave hand event)) and ¶ [0021]: “In some embodiments, the shot analytics is selected from the group consisting of release time, back angle, leg bend ratio, leg power, moving speed, moving direction, and height of jump.” Here, the determining to include in textual content a description based on the first group of two or more events is read by disclosures in Zhang et al. regarding shot analytics or the events associated with player 904 managing the ball (e.g., shot event: dibble event, jump event, catch-ball event, etc.). For example, as seen in Fig. 
11A: “Shooting info”, “jump event, ball-leave-hand event”.); based on the second group of two or more events, determining not to include in the textual content any description based on the second group of two or more events of the second physical object (see ¶ [0020, 0099-0100, and 150] citations as in limitation(s) above and further Fig. 11A (1142 (shooting info) and 1115 (jump event and ball-leave hand event)) and ¶ [0021]: “In some embodiments, the shot analytics is selected from the group consisting of release time, back angle, leg bend ratio, leg power, moving speed, moving direction, and height of jump.” Here, the Examiner notes that no information regarding the additional player 905 OR scene information (e.g., hoop) are included in the shot analytics as disclosed in Zhang et al., wherein only shot analytics or the events associated with player 904 managing the ball are included (e.g., shot event: dibble event, jump event, catch-ball event, etc.). For example, as seen in Fig. 11A: “Shooting info”, “jump event, ball-leave-hand event”.); U.S. Patent Application No. US 17/976,807 in view of Zhang et al. (US 20190366153 A1) are considered to be analogous to the claimed invention because they are in the same field of endeavor in text/information generation. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified U.S. Patent Application No. US 17/976,807 to incorporate the teachings of Zhang et al. of calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data; identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object; receiving an indication of a first group of two or more events in the physical world caused by the first physical object; receiving an indication of a second group of two or more events in the physical world caused by the second physical object; based on the first group of two or more events, determining to include in a textual content a description based on the first group of two or more events of the first physical object; based on the second group of two or more events, determining not to include in the textual content any description based on the second group of two or more events of the second physical object; which provides the benefit optimizing the feature extraction process to not only reduce the overall computational complexity but also improve the achievable accuracy by tailoring to the specific small input and ball detection goal.([0134] of Zhang et al.). Claims 1-7 and 9-21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of co-pending patent application (U.S. Patent Application No. 17/976,808) in view of Zhang et al. (US 20190366153 A1). The claims of the issued patent are similar in scope than that of the instant application. However, the claims of the co-pending patent application (U.S. Patent Application No. 
US 17/976,808) do not explicitly teach wherein the method, as presented in the instant application independent claims, comprise: calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data; identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object; receiving an indication of a first group of two or more events in the physical world caused by the first physical object; receiving an indication of a second group of two or more events in the physical world caused by the second physical object; based on the first group of two or more events, determining to include in a textual content a description based on the first group of two or more events of the first physical object; based on the second group of two or more events, determining not to include in the textual content any description based on the second group of two or more events of the second physical object; Zhang et al. does teach wherein the method further comprises: calculating a convolution of at least part of the image data to thereby obtain a result value of the calculated convolution of the at least part-of the image data (see ¶ [0085]: “In process step 210, objects of interests are detected from frames of the input video. In particular, one or more convolutional neural networks (CNN) may be applied to identify desired objects including balls and players in the input video, and the detected objects are passed as input 215 to process step 220. Each CNN module may be trained using one or more prior input videos. In individual training sessions, only a single player is present, although multiple balls may be moving through the court if a basketball shooting machine is used. In multiple-player training sessions or games, multiple players and multiple balls may be present. A CNN utilizes the process of convolution to capture the spatial and temporal dependencies in an image, and to extract features from the input video for object detection. Feature extraction in turn enables the segmentations or identifications of image areas representing balls and players, and further analysis to determine player body postures. A ball moves through space, leading to changing size and location from video frame to video frame. A player also moves through space while handling the ball leading to both changing locations, sizes, and body postures.”); identifying a plurality of physical objects based on the result value of the calculated convolution of the at least part of the image data, the plurality of physical objects includes at least a first physical object and a second physical object (see Fig. 9B (players: 904-905, ball: 902, and basket/hoop: 980), ¶ [0085] citation as in limitation above. More specifically: “…A CNN utilizes the process of convolution to capture the spatial and temporal dependencies in an image, and to extract features from the input video for object detection. Feature extraction in turn enables the segmentations or identifications of image areas representing balls and players, and further analysis to determine player body postures. A ball moves through space, leading to changing size and location from video frame to video frame. 
A player also moves through space while handling the ball leading to both changing locations, sizes, and body postures.”, and ¶ [0154]: “Embodiments of the present invention may first detect the ball, determine a corresponding trajectory, then trace the ball trajectory to see if it ends in a shot attempt. For example, a box 902 in FIG. 9A represents a ball extraction result with confidence value of 1.000. Trajectory 903, represented as a dotted curve in FIG. 9A, may be reconstructed directly from a ball flow comprising a sequence of ball objects, or be generated by interpolating and/or extrapolating several known ball positions in air. […]
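For readers mapping the disputed independent-claim language onto concrete steps, here is a minimal Python sketch of the pipeline the limitations describe: a convolution over image data, object identification from the convolution's result values, and an event-group-based decision about which objects receive a description in the textual content. Every function name, kernel, threshold, and inclusion rule below is a hypothetical illustration of the claim wording, not the applicant's method and not Zhang's implementation.

```python
# Illustrative sketch of the independent-claim limitations only. The kernel,
# detection threshold, and inclusion rule are hypothetical; this is not the
# applicant's method or Zhang's implementation.
import numpy as np
from scipy.signal import convolve2d

def identify_objects(image: np.ndarray) -> list[dict]:
    """Calculate a convolution of at least part of the image data, then
    identify physical objects from the result values."""
    kernel = np.ones((3, 3)) / 9.0                    # hypothetical kernel
    result = convolve2d(image, kernel, mode="same")   # "result value" of the convolution
    ys, xs = np.where(result > 0.8)                   # hypothetical detection rule
    return [{"id": i, "pos": (int(y), int(x))} for i, (y, x) in enumerate(zip(ys, xs))]

def build_textual_content(events_by_object: dict[int, list[str]]) -> str:
    """For each object, decide from its group of two or more events whether to
    include a description in the textual content; omit all other objects."""
    lines = []
    for obj_id, events in events_by_object.items():
        # Hypothetical inclusion rule: describe only shot-related event groups.
        if len(events) >= 2 and "ball-leave-hand" in events:
            lines.append(f"object {obj_id}: {', '.join(events)}")
        # else: determine NOT to include any description of this object
    return "\n".join(lines)

# Usage mirroring the Examiner's mapping: player 904's event group is
# reported, player 905's is not.
print(f"{len(identify_objects(np.random.rand(64, 64)))} candidate detections")
print(build_textual_content({
    904: ["jump", "ball-leave-hand"],  # first physical object -> described
    905: ["catch-ball", "dribble"],    # second physical object -> omitted
}))
```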

Prosecution Timeline

Oct 30, 2022
Application Filed
Mar 19, 2025
Non-Final Rejection — §101, §102, §103
Jul 21, 2025
Response Filed
Oct 08, 2025
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573402 — GENERATING AND/OR UTILIZING UNINTENTIONAL MEMORIZATION MEASURE(S) FOR AUTOMATIC SPEECH RECOGNITION MODEL(S)
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12536989 — Language-agnostic Multilingual Modeling Using Effective Script Normalization
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12531050 — VOICE DATA CREATION DEVICE
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12499332 — TRANSLATING TEXT USING GENERATED VISUAL REPRESENTATIONS AND ARTIFICIAL INTELLIGENCE
Granted Dec 16, 2025 (2y 5m to grant)
Patent 12488180 — SYSTEMS AND METHODS FOR GENERATING DIALOG TREES
Granted Dec 02, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 99% (+30.5%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
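One reading of these figures, consistent with the footnote, is that the base projection simply reuses the 74% career allow rate and the with-interview projection adds the +30.5-point lift with a display cap. The additive combination and the 99% cap are assumptions that merely reproduce the displayed numbers; the page does not state its formula.

```python
# Assumed derivation of the projection figures; the additive lift and the
# 99% cap are guesses that reproduce the displayed numbers, nothing more.
career_allow_rate = 80 / 108 * 100      # 74.1%, shown as 74%
interview_lift = 30.5                   # percentage points

with_interview = min(career_allow_rate + interview_lift, 99.0)
print(f"base {career_allow_rate:.0f}%, with interview {with_interview:.0f}%")
# -> base 74%, with interview 99%
```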
